| Duration   | 6698.0 sec |
|------------|------------|
| Test Cases | 2674       |
| Failures   | 26         |
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:44.274: INFO: Driver cinder doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:43.900: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:43.282: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:40.302: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:39.947: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:39.803: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:35.381: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:34.968: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 10:09:10.202: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:09:10.389934 59853 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:09:10.390: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ostest-n5rnf-worker-0-8kq82" using path "/tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073"
Oct 13 10:09:12.458: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073 && dd if=/dev/zero of=/tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 10:09:12.659: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 10:09:12.828: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073 && chmod o+rwx /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 10:09:13.110: INFO: Creating a PV followed by a PVC
Oct 13 10:09:13.160: INFO: Waiting for PV local-pvs7blb to bind to PVC pvc-dfjr2
Oct 13 10:09:13.160: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dfjr2] to have phase Bound
Oct 13 10:09:13.177: INFO: PersistentVolumeClaim pvc-dfjr2 found but phase is Pending instead of Bound.
Oct 13 10:09:15.184: INFO: PersistentVolumeClaim pvc-dfjr2 found but phase is Pending instead of Bound.
Oct 13 10:09:17.242: INFO: PersistentVolumeClaim pvc-dfjr2 found but phase is Pending instead of Bound.
Oct 13 10:09:19.249: INFO: PersistentVolumeClaim pvc-dfjr2 found but phase is Pending instead of Bound.
Oct 13 10:09:21.256: INFO: PersistentVolumeClaim pvc-dfjr2 found and phase=Bound (8.096047407s)
Oct 13 10:09:21.256: INFO: Waiting up to 3m0s for PersistentVolume local-pvs7blb to have phase Bound
Oct 13 10:09:21.263: INFO: PersistentVolume local-pvs7blb found and phase=Bound (7.132318ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 10:09:21.270: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: blockfswithformat]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 10:09:21.270: INFO: Deleting PersistentVolumeClaim "pvc-dfjr2"
Oct 13 10:09:21.278: INFO: Deleting PersistentVolume "local-pvs7blb"
Oct 13 10:09:21.293: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 10:09:21.453: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Tear down block device "/dev/loop0" on node "ostest-n5rnf-worker-0-8kq82" at path /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073/file
Oct 13 10:09:21.587: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Removing the test directory /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073
Oct 13 10:09:21.749: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fb4ced7e-8dc2-4911-af43-97302e52b073] Namespace:e2e-persistent-local-volumes-test-3610 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-mkhfv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-3610" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
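The blockfswithformat setup above reduces to a short loop-device recipe: back a file with `dd`, attach it with `losetup`, format and mount it, then unwind in reverse order during cleanup. The following is a minimal standalone sketch of that lifecycle, not the test's exact helper: the directory name is a hypothetical stand-in (the test generates a random UUID path), and the loop device is resolved with `losetup -j` instead of the `losetup | grep | awk` pipeline shown in the log.

```sh
#!/usr/bin/env bash
# Sketch of the block-device setup/teardown this test performs on the node.
# Run as root on a scratch host.
set -euo pipefail

VOL_DIR=/tmp/local-volume-test-example   # hypothetical path; the test uses a random UUID

# Setup: create a 20 MiB backing file, attach a loop device, format, mount.
mkdir -p "${VOL_DIR}"
dd if=/dev/zero of="${VOL_DIR}/file" bs=4096 count=5120
losetup -f "${VOL_DIR}/file"                            # attach first free loop device
LOOP_DEV=$(losetup -j "${VOL_DIR}/file" | cut -d: -f1)  # e.g. /dev/loop0
mkfs -t ext4 "${LOOP_DEV}"
mount -t ext4 "${LOOP_DEV}" "${VOL_DIR}"
chmod o+rwx "${VOL_DIR}"

# Teardown, mirroring the AfterEach above: unmount, detach, remove.
umount "${VOL_DIR}"
losetup -d "${LOOP_DEV}"
rm -r "${VOL_DIR}"
```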
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:09.665: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:09.277: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:08.815: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:08.381: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:07.947: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:07.560: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:07.173: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:06.774: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:06.421: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:09:06.019: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:55.507: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:55.194: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:51.887: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:51.848: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:51.535: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:51.156: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:46.585: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:46.235: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:45.859: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:21.685: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:21.259: INFO: Driver nfs doesn't publish storage capacity -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver nfs doesn't publish storage capacity -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:15.657: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:10.429: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:09.942: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:09.467: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:08.980: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:08.487: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:08.055: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 10:08:01.548: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver)
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:01.177: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:00.809: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:08:00.380: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:59.907: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:59.482: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:56.124: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:55.769: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:53.584: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:53.011: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:52.639: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 10:07:52.761: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:07:53.054221 56481 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:07:53.054: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 10:07:53.059: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-6933" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:52.340: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:52.128: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:51.756: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:51.416: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:34.729: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:34.362: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:33.996: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:25.253: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:24.775: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:24.592: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:24.386: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:24.247: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:23.999: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:23.920: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:23.663: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumelimits
Oct 13 10:07:22.307: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:07:22.544231 55490 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:07:22.544: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that all csinodes have volume limits [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:238
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumelimits-7902" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:241]: driver cinder does not support volume limits
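The volumeLimits check skipped here looks at whether each CSINode object advertises an allocatable volume count for its drivers; per the skip message, the cinder driver used in this run does not. A rough way to eyeball the same data on a live cluster, assuming `kubectl` access (the column expressions are illustrative JSONPath over the storage.k8s.io/v1 CSINode schema):

```sh
# List each node's CSI drivers and their reported attachable-volume limits.
kubectl get csinodes -o custom-columns=\
'NODE:.metadata.name,DRIVER:.spec.drivers[*].name,LIMIT:.spec.drivers[*].allocatable.count'
```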
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:21.703: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 10:07:21.360: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:14.721: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 10:07:14.099: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:07:14.301712 54940 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:07:14.301: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 10:07:14.309: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-95" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
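All of the "Only supported for providers [...]" skips in this run come from the e2e framework comparing each test's provider allowlist against the provider configured for the run, which is openstack here. A sketch of how that configuration is typically supplied; the exact wiring of this particular CI job is an assumption:

# Upstream e2e binary: provider is a flag.
./e2e.test -provider=openstack -ginkgo.focus='\[sig-storage\]'
# openshift-tests wrapper (assumption: conventional env-based configuration):
TEST_PROVIDER='{"type":"openstack"}' openshift-tests run openshift/conformance/parallel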
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:13.506: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
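The volume-expand patterns, skipped above because the gce-pd driver is provider-gated, need a StorageClass that allows expansion; growth is then requested by patching the claim. An illustrative sketch using the cinder CSI provisioner this OpenStack cluster would actually have; object names are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable           # placeholder name
provisioner: cinder.csi.openstack.org
allowVolumeExpansion: true   # without this, resize requests are rejected
EOF
# Grow the claim in place; online vs. on-next-mount depends on the driver.
kubectl patch pvc demo-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'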
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:13.176: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:12.959: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:12.852: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:12.481: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:12.048: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:11.678: INFO: Driver "nfs" does not support topology - skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping
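The delayed-binding topology pattern that nfs cannot satisfy pairs WaitForFirstConsumer binding with a topology constraint on the StorageClass, so the volume is provisioned in the zone of the pod that first uses it. An illustrative sketch; the provisioner and zone value are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topo-aware                        # placeholder name
provisioner: cinder.csi.openstack.org     # any topology-capable driver
volumeBindingMode: WaitForFirstConsumer   # defer binding until a pod is scheduled
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values: ["nova"]                      # illustrative zone name
EOF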
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:11.383: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
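The PreprovisionedPV pattern being skipped above is the static-provisioning path: an admin-created PV claimed with an empty storageClassName so no provisioner intervenes. A minimal illustrative pair, assuming an NFS backend; server and path are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preprov-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  nfs:                           # placeholder backend; any static source works
    server: 192.0.2.10
    path: /exports/demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprov-pvc
spec:
  storageClassName: ""           # empty class disables dynamic provisioning
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF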
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:11.080: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:10.749: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:10.362: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:09.960: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:07:09.580: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:52.406: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:52.021: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:51.660: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:51.310: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:50.861: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:50.410: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:49.988: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:40.183: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
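The volmode patterns flip the claim between its two modes; emptydir has no dynamic provisioner, hence the skip. An illustrative claim showing the field under test (name is a placeholder):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-claim
spec:
  volumeMode: Block              # raw device; "Filesystem" (the default) mounts a formatted fs
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF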
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:39.818: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:39.452: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:39.076: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:38.724: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:38.322: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:37.920: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:32.698: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:32.348: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:31.983: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:31.596: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:31.221: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:30.852: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:30.499: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:30.216: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:29.888: INFO: Driver nfs doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:28.330: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:27.995: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:06:26.267: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:59.835: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:59.508: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:59.140: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:58.631: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:58.099: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:57.759: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:57.464: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Operations Storm [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-ops-storm
Oct 13 10:05:56.800: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:05:57.126686 51322 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:05:57.126: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Operations Storm [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_ops_storm.go:66
Oct 13 10:05:57.131: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Operations Storm [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-ops-storm-2109" for this suite.
[AfterEach] [sig-storage] Volume Operations Storm [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_ops_storm.go:80
STEP: Deleting PVCs
STEP: Deleting StorageClass
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_ops_storm.go:67]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:55.394: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:30.445: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:30.008: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:29.656: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:29.307: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:28.915: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:28.494: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:28.116: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-provision
Oct 13 10:05:25.550: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:05:25.794541 50451 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:05:25.794: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:52
Oct 13 10:05:25.802: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-provision-6754" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:24.743: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:17.478: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:17.148: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:16.781: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:16.419: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:16.131: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:03.272: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:03.261: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:02.973: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:02.838: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:02.617: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:02.528: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:02.277: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:02.220: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:05:01.946: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 10:04:07.271: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:04:07.530687 47341 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:04:07.530: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ostest-n5rnf-worker-0-94fxs" using path "/tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373"
Oct 13 10:04:09.628: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373 && dd if=/dev/zero of=/tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 10:04:09.792: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 10:04:09.930: INFO: Creating a PV followed by a PVC
Oct 13 10:04:09.956: INFO: Waiting for PV local-pvcfbdv to bind to PVC pvc-8xh4r
Oct 13 10:04:09.956: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8xh4r] to have phase Bound
Oct 13 10:04:09.965: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:11.975: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:13.986: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:15.992: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:17.998: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:20.004: INFO: PersistentVolumeClaim pvc-8xh4r found but phase is Pending instead of Bound.
Oct 13 10:04:22.013: INFO: PersistentVolumeClaim pvc-8xh4r found and phase=Bound (12.056430146s)
Oct 13 10:04:22.013: INFO: Waiting up to 3m0s for PersistentVolume local-pvcfbdv to have phase Bound
Oct 13 10:04:22.018: INFO: PersistentVolume local-pvcfbdv found and phase=Bound (4.999665ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 10:04:22.037: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: blockfswithoutformat]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 10:04:22.038: INFO: Deleting PersistentVolumeClaim "pvc-8xh4r"
Oct 13 10:04:22.050: INFO: Deleting PersistentVolume "local-pvcfbdv"
Oct 13 10:04:22.069: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Tear down block device "/dev/loop0" on node "ostest-n5rnf-worker-0-94fxs" at path /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373/file
Oct 13 10:04:22.261: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Removing the test directory /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373
Oct 13 10:04:22.432: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373] Namespace:e2e-persistent-local-volumes-test-4197 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-69tbm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-4197" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
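For anyone reproducing this volume setup by hand, the hostexec commands recorded in the entry above reduce to the sketch below (run as root on the node, with the `nsenter` host-namespace wrapper dropped; `VOLDIR` is a placeholder for the per-run `/tmp/local-volume-test-...` path, and `dd` writes 5120 blocks of 4096 bytes, i.e. a 20 MiB backing file):

```sh
#!/bin/sh
# Minimal sketch of the loop-device lifecycle the hostexec pod ran above.
# VOLDIR is a placeholder; this run used
# /tmp/local-volume-test-e7721403-4304-41f9-b20a-3ab4102bd373.
VOLDIR=/tmp/local-volume-test-example

# Setup: create a 20 MiB backing file and attach it to the first free
# loop device (requires root).
mkdir -p "$VOLDIR"
dd if=/dev/zero of="$VOLDIR/file" bs=4096 count=5120
losetup -f "$VOLDIR/file"

# Recover the attached device name the same way the suite does:
# grep the backing file out of the losetup listing.
LOOP_DEV=$(losetup | grep "$VOLDIR/file" | awk '{ print $1 }')

# Teardown, mirroring the AfterEach steps: detach, then remove the directory.
losetup -d "$LOOP_DEV"
rm -r "$VOLDIR"
```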
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:04:06.615: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:59.653: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:59.273: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:55.246: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:54.941: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:54.854: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 10:03:54.205: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:03:54.416742 47097 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:03:54.416: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 10:03:54.420: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-8416" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:53.689: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:53.308: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 10:03:50.666: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:03:50.961501 47027 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:03:50.961: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: tmpfs]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating tmpfs mount point on node "ostest-n5rnf-worker-0-j4pkp" at path "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4"
Oct 13 10:03:53.060: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4" "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4"] Namespace:e2e-persistent-local-volumes-test-6456 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-wxb9f ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 10:03:53.221: INFO: Creating a PV followed by a PVC
Oct 13 10:03:53.249: INFO: Waiting for PV local-pvfmb5b to bind to PVC pvc-8g8x4
Oct 13 10:03:53.249: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8g8x4] to have phase Bound
Oct 13 10:03:53.255: INFO: PersistentVolumeClaim pvc-8g8x4 found but phase is Pending instead of Bound.
Oct 13 10:03:55.273: INFO: PersistentVolumeClaim pvc-8g8x4 found and phase=Bound (2.02414413s)
Oct 13 10:03:55.273: INFO: Waiting up to 3m0s for PersistentVolume local-pvfmb5b to have phase Bound
Oct 13 10:03:55.281: INFO: PersistentVolume local-pvfmb5b found and phase=Bound (8.273116ms)
[BeforeEach] Set fsGroup for local volume
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 10:03:55.297: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: tmpfs]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 10:03:55.298: INFO: Deleting PersistentVolumeClaim "pvc-8g8x4"
Oct 13 10:03:55.315: INFO: Deleting PersistentVolume "local-pvfmb5b"
STEP: Unmount tmpfs mount point on node "ostest-n5rnf-worker-0-j4pkp" at path "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4"
Oct 13 10:03:55.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4"] Namespace:e2e-persistent-local-volumes-test-6456 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-wxb9f ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Removing the test directory
Oct 13 10:03:55.517: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4] Namespace:e2e-persistent-local-volumes-test-6456 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-wxb9f ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-6456" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
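The tmpfs variant of the same lifecycle, as recorded above, is even shorter. A minimal sketch follows (again run as root with the `nsenter` wrapper dropped; `VOLDIR` is a placeholder for the per-run path, and the suite passes `tmpfs-<path>` as the device field so the mount is easy to identify in the mount table):

```sh
#!/bin/sh
# Minimal sketch of the tmpfs lifecycle the hostexec pod ran above.
# VOLDIR is a placeholder; this run used
# /tmp/local-volume-test-ad2921ed-595e-4a29-a1a0-a9e40a7d5bb4.
VOLDIR=/tmp/local-volume-test-example

# Setup: mount a 10 MiB tmpfs at the test path (requires root).
mkdir -p "$VOLDIR"
mount -t tmpfs -o size=10m "tmpfs-$VOLDIR" "$VOLDIR"

# Teardown, mirroring the AfterEach steps: unmount, then remove the directory.
umount "$VOLDIR"
rm -r "$VOLDIR"
```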
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:50.186: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:40.890: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-disk-format
Oct 13 10:03:40.292: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:03:40.506380 46682 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:03:40.506: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:70
Oct 13 10:03:40.509: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Disk Format [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-disk-format-7783" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:30.619: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:30.276: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:26.924: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:06.091: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:05.750: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:03:05.363: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:52.372: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:52.057: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:51.737: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:51.413: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:48.919: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:48.591: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Provisioning on Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-datastore
Oct 13 10:02:47.784: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:02:48.067441 44078 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:02:48.067: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning on Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_datastore.go:60
Oct 13 10:02:48.075: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Provisioning on Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-datastore-3006" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_datastore.go:61]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:47.187: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:45.970: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:22.076: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:17.693: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:12.610: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:12.278: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:11.945: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:11.637: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:07.339: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:06.987: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:06.627: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:06.272: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:05.922: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:04.530: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:02:04.177: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:48.800: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:48.284: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:47.841: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:47.419: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:29.795: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:29.423: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:19.890: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 10:01:19.729: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 10:01:19.934220 40849 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 10:01:19.934: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 10:01:19.940: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-3265" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:19.538: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:11.640: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:11.179: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:10.766: INFO: Driver "csi-hostpath" does not support topology - skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:03.844: INFO: Driver hostPath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:03.549: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:03.414: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:03.117: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:02.676: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:02.266: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:01:01.569: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:33.879: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:26.164: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:24.500: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:23.531: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:07.636: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:07.318: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:06.917: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:06.486: INFO: Driver "nfs" does not support topology - skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping
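Topology suites additionally require the driver to report topology keys (zone/region-style labels); "nfs" reports none, so both immediate-binding and delayed-binding topology patterns skip at topology.go:92. Assuming a reachable cluster and the default kubeconfig location, a client-go sketch that lists the node labels such keys are drawn from:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster and the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Topology-aware provisioning keys are surfaced as node labels.
	for _, n := range nodes.Items {
		for k, v := range n.Labels {
			if strings.HasPrefix(k, "topology.") || strings.HasPrefix(k, "failure-domain.") {
				fmt.Printf("%s: %s=%s\n", n.Name, k, v)
			}
		}
	}
}
```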
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:06.105: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 10:00:05.739: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:52.634: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:48.419: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:47.914: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:47.560: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:46.987: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:46.511: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] GKE local SSD [Feature:GKELocalSSD] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename localssd Oct 13 09:59:46.867: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:59:47.118750 36553 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:59:47.118: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] GKE local SSD [Feature:GKELocalSSD] k8s.io/kubernetes@v1.22.1/test/e2e/storage/gke_local_ssd.go:37 Oct 13 09:59:47.130: INFO: Only supported for providers [gke] (not openstack) [AfterEach] [sig-storage] GKE local SSD [Feature:GKELocalSSD] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-localssd-9872" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/gke_local_ssd.go:38]: Only supported for providers [gke] (not openstack)
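Entries like the one above reach further into the framework before skipping: a throwaway namespace is created (basename plus a random suffix, e.g. "e2e-localssd-9872"), PodSecurityPolicy support is probed (deprecated in v1.21+, hence the warning), the default service account is awaited, and the namespace is destroyed again in AfterEach even though the test never ran. A hedged client-go sketch of that create/delete lifecycle, assuming the default kubeconfig location, with minimal error handling:

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Create a throwaway namespace, as the framework does per test
	// (basename plus a random suffix, e.g. "e2e-localssd-9872").
	ns, err := cs.CoreV1().Namespaces().Create(ctx, &v1.Namespace{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "e2e-localssd-"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", ns.Name)

	// Mirror the AfterEach "Destroying namespace ... for this suite" step.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```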
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:32.023: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:31.619: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:31.233: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:30.763: INFO: Driver cinder doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:30.367: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:30.014: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:26.668: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:26.213: INFO: Driver "cinder" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
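The volume-expand suite gates on a per-driver capability: "cinder", as configured for this run, doesn't advertise expansion, so every (allowExpansion) pattern skips at volume_expand.go:94. The cluster-side half of that capability is a StorageClass with allowVolumeExpansion set; a sketch constructing one in Go (the provisioner is the in-tree cinder name, shown purely for illustration):

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	expand := true
	// A StorageClass that permits PVC expansion; the driver must also
	// implement the expansion calls, which is the capability probed above.
	sc := storagev1.StorageClass{
		ObjectMeta:           metav1.ObjectMeta{Name: "expandable-example"},
		Provisioner:          "kubernetes.io/cinder", // in-tree cinder provisioner, for illustration
		AllowVolumeExpansion: &expand,
	}
	fmt.Printf("%s: allowVolumeExpansion=%v\n", sc.Name, *sc.AllowVolumeExpansion)
}
```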
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:25.873: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:25.713: INFO: Driver "nfs" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping
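Raw-block patterns skip along the same lines: volumes.go:113 checks whether the driver can hand out block devices, which NFS cannot. Where these tests do run, the block device is requested through the claim's volumeMode; a self-contained sketch of such a claim (the claim name is illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	block := v1.PersistentVolumeBlock
	// A claim asking for a raw block device rather than a filesystem;
	// this is what the skipped block-volmode patterns would create.
	pvc := v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "raw-block-example"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			VolumeMode:  &block,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
	fmt.Println(pvc.Name, "volumeMode:", *pvc.Spec.VolumeMode)
}
```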
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:15.938: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:08.208: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:07.725: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:07.307: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:06.774: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:06.354: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:05.993: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:05.888: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:59:05.483: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:56.769: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:55.439: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:55.034: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:54.665: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:54.257: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:33.160: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:32.094: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:31.879: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-vsan-policy Oct 13 09:58:31.330: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:58:31.573324 33702 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:58:31.573: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86 Oct 13 09:58:31.578: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-vsan-policy-3120" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:30.686: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:30.242: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:29.818: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:28.307: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:18.403: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:08.686: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:08.392: INFO: Driver csi-hostpath doesn't publish storage capacity -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver csi-hostpath doesn't publish storage capacity -- skipping
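capacity.go:78 skips because the csi-hostpath deployment in this run doesn't publish storage capacity. When a CSI driver does, it writes CSIStorageCapacity objects (storage.k8s.io/v1beta1 at the v1.22 level used here) that the scheduler and this suite consume. A client-go sketch that lists them, again assuming the default kubeconfig and a reachable cluster:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// CSIStorageCapacity sat in storage.k8s.io/v1beta1 at the v1.22 level of this run.
	caps, err := cs.StorageV1beta1().CSIStorageCapacities(metav1.NamespaceAll).List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range caps.Items {
		fmt.Printf("%s/%s class=%s capacity=%v\n", c.Namespace, c.Name, c.StorageClassName, c.Capacity)
	}
}
```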
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:01.035: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:58:00.701: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] PersistentVolumes GCEPD k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pv Oct 13 09:57:59.953: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:58:00.238840 32420 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:58:00.238: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:77 Oct 13 09:58:00.243: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [sig-storage] PersistentVolumes GCEPD k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-pv-4725" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:111 Oct 13 09:58:00.284: INFO: AfterEach: Cleaning up test resources Oct 13 09:58:00.284: INFO: pvc is nil Oct 13 09:58:00.284: INFO: pv is nil skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:59.458: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:49.322: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:48.976: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:47.714: INFO: Driver "csi-hostpath" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
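fsgroupchangepolicy.go:79 gates on whether the driver supports fsGroup-based ownership management; csi-hostpath as deployed here opts out, so those policy tests skip. The pod-side knobs the suite would exercise are securityContext.fsGroup and fsGroupChangePolicy; a sketch of a pod setting both (pod name and image are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(2000)
	policy := v1.FSGroupChangeOnRootMismatch
	// A pod exercising the two knobs the skipped suite covers:
	// volume ownership via fsGroup, and when the kubelet re-chowns the volume.
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-example"},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{
				FSGroup:             &fsGroup,
				FSGroupChangePolicy: &policy,
			},
			Containers: []v1.Container{{Name: "app", Image: "busybox"}}, // illustrative image
		},
	}
	fmt.Println(pod.Name, "fsGroupChangePolicy:", *pod.Spec.SecurityContext.FSGroupChangePolicy)
}
```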
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:47.369: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:46.935: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:44.969: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:43.795: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:43.434: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:42.966: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:42.611: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:42.190: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:41.713: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:41.277: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:40.820: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:40.388: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:32.540: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:31.208: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:26.670: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:26.301: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:57:17.794: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:57:18.036147 30361 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:57:18.036: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:57:18.042: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-9800" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:17.242: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49 Oct 13 09:57:16.859: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:16.544: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:07.067: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:06.740: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:06.357: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:06.025: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:04.299: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:03.990: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:03.674: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:57:03.070: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:57:03.298752 29519 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:57:03.298: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:57:03.302: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-139" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volumemode Oct 13 09:57:02.177: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:57:02.411711 29503 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:57:02.411: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352 Oct 13 09:57:02.415: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volumemode-6752" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:01.618: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:57:01.270: INFO: Driver emptydir doesn't support ntfs -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:57.036: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:56.644: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:56.280: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:55.857: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:55.479: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:55.062: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:54.758: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:54.419: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:53.189: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:51.935: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:51.544: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:51.138: INFO: Driver nfs doesn't support ext3 -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:50.690: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] SCTP [Feature:SCTP] [LinuxOnly] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename sctp Oct 13 09:56:47.522: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:56:47.726486 28933 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:56:47.726: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] SCTP [Feature:SCTP] [LinuxOnly] k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:3220 [It] should create a ClusterIP Service with SCTP ports [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:3332 STEP: checking that kube-proxy is in iptables mode Oct 13 09:56:47.785: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 13 09:56:49.801: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Oct 13 09:56:49.812: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-sctp-1997 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 13 09:56:50.197: INFO: rc: 7 Oct 13 09:56:50.225: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 13 09:56:50.236: INFO: Pod kube-proxy-mode-detector no longer exists Oct 13 09:56:50.236: INFO: Couldn't detect KubeProxy mode - skip, error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-sctp-1997 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode: Command stdout: stderr: + curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode command terminated with exit code 7 error: exit status 7 [AfterEach] [sig-network] SCTP [Feature:SCTP] [LinuxOnly] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-sctp-1997" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:3335]: Couldn't detect KubeProxy mode - skip, error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-sctp-1997 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode: Command stdout: stderr: + curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode command terminated with exit code 7 error: exit status 7
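The SCTP entry above skips for a different reason: before creating an SCTP Service, the test verifies kube-proxy is running in iptables mode by exec'ing `curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode` in a detector pod. Curl's exit code 7 means the connection was refused, i.e. nothing was listening on kube-proxy's default metrics port, which is expected on clusters whose network plugin replaces a standalone kube-proxy. A minimal Go equivalent of that probe (a sketch, not the framework's code; run it wherever that port would be reachable, whereas the test execs curl inside its detector pod):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the test curls; 10249 is kube-proxy's default
	// metrics bind port, and /proxyMode reports the active mode.
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://localhost:10249/proxyMode")
	if err != nil {
		// Equivalent of curl's exit code 7 in the log: nothing is
		// listening, so the test gives up and skips.
		fmt.Println("couldn't detect kube-proxy mode:", err)
		return
	}
	defer resp.Body.Close()
	mode, _ := io.ReadAll(resp.Body)
	fmt.Printf("kube-proxy mode: %s\n", mode) // e.g. "iptables" or "ipvs"
}
```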
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename persistent-local-volumes-test Oct 13 09:56:40.924: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:56:41.165085 28899 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:56:41.165: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 13 09:56:43.252: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918-backend && ln -s /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918-backend /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918] Namespace:e2e-persistent-local-volumes-test-9355 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-h5cpn ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} STEP: Creating local PVCs and PVs Oct 13 09:56:43.395: INFO: Creating a PV followed by a PVC Oct 13 09:56:43.418: INFO: Waiting for PV local-pvwt8fr to bind to PVC pvc-9sxvv Oct 13 09:56:43.418: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9sxvv] to have phase Bound Oct 13 09:56:43.423: INFO: PersistentVolumeClaim pvc-9sxvv found but phase is Pending instead of Bound. Oct 13 09:56:45.432: INFO: PersistentVolumeClaim pvc-9sxvv found and phase=Bound (2.014087942s) Oct 13 09:56:45.433: INFO: Waiting up to 3m0s for PersistentVolume local-pvwt8fr to have phase Bound Oct 13 09:56:45.436: INFO: PersistentVolume local-pvwt8fr found and phase=Bound (3.829255ms) [BeforeEach] Set fsGroup for local volume k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286 Oct 13 09:56:45.450: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 13 09:56:45.450: INFO: Deleting PersistentVolumeClaim "pvc-9sxvv" Oct 13 09:56:45.463: INFO: Deleting PersistentVolume "local-pvwt8fr" STEP: Removing the test directory Oct 13 09:56:45.489: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918 && rm -r /tmp/local-volume-test-8576da6b-12b8-42ee-9e65-28944b72e918-backend] Namespace:e2e-persistent-local-volumes-test-9355 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-h5cpn ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} [AfterEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-persistent-local-volumes-test-9355" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
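The local-volume entry above also records the claim-binding wait the framework performs ("Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9sxvv] to have phase Bound", polling from Pending to Bound) before the test skips on the referenced issue. A hedged client-go sketch of that poll loop (assuming client-go in the v0.22.x line to match the k8s.io/kubernetes@v1.22.1 suite; WaitForPVCBound is an invented name, not the framework's actual helper):

```go
package pvcwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPVCBound polls a claim until it reports phase Bound or the timeout
// expires, mirroring the "found but phase is Pending instead of Bound" /
// "found and phase=Bound" lines in the log above.
func WaitForPVCBound(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}
```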
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:56:40.119: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:56:40.323221 28884 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:56:40.323: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:56:40.326: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-5629" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:39.407: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:38.990: INFO: Driver nfs doesn't support Block -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Flexvolumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename flexvolume Oct 13 09:56:38.185: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:56:38.556899 28844 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:56:38.557: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:169 Oct 13 09:56:38.577: INFO: Only supported for providers [gce local] (not openstack) [AfterEach] [sig-storage] Flexvolumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-flexvolume-6438" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:170]: Only supported for providers [gce local] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:37.553: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:37.120: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:36.722: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:36.312: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:35.794: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:32.104: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:16.224: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:10.633: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:10.330: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:10.033: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:09.707: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:56:06.660: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:31.944: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:31.616: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:31.279: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:30.945: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:25.293: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:24.953: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:14.862: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:14.544: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-node] AppArmor k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename apparmor Oct 13 09:55:13.856: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:55:14.180820 25532 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:55:14.180: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles k8s.io/kubernetes@v1.22.1/test/e2e/node/apparmor.go:32 Oct 13 09:55:14.200: INFO: Only supported for node OS distro [gci ubuntu] (not custom) [AfterEach] load AppArmor profiles k8s.io/kubernetes@v1.22.1/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-apparmor-5501" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/framework/skipper/skipper.go:291]: Only supported for node OS distro [gci ubuntu] (not custom)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:13.228: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:12.896: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:12.577: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:12.189: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:11.808: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:11.488: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:11.156: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:08.469: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:55:07.757: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:55:08.054236 25271 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:55:08.054: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:55:08.059: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-5053" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
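Entries like this one differ from the plain capability skips: the suite-level framework hook (framework.go:185) builds a client and a namespace before the provider gate at vsphere_volume_vsan_policy.go:86 runs, so even a skipped spec creates and destroys a namespace, and triggers the PodSecurityPolicy deprecation warning when the framework probes for PSPs. A hedged Ginkgo sketch of that hook ordering (Ginkgo v2 syntax for brevity; the suite above is on v1, and all names here are illustrative):

```go
package ordering_test

import (
	"testing"

	"github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
)

func TestOrdering(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "hook ordering demo")
}

var _ = ginkgo.Describe("provider-gated suite", func() {
	ginkgo.BeforeEach(func() {
		// Stands in for framework.go:185: client and namespace creation
		// happen here, unconditionally, before the gate below.
	})

	ginkgo.BeforeEach(func() {
		// Stands in for the vsphere_volume_vsan_policy.go:86 gate: by the
		// time it fires, the namespace already exists, so teardown
		// (framework.go:186) still has to destroy it.
		ginkgo.Skip("Only supported for providers [vsphere] (not openstack)")
	})

	ginkgo.It("provisions a volume from a storage policy", func() {
		// Never reached on openstack.
	})
})
```

BeforeEach hooks at the same nesting level run in declaration order, which is exactly the create-namespace-then-skip sequence visible in the log.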
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:07.145: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:06.731: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:06.334: INFO: Driver "cinder" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
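Besides volume type, filesystem, and provider, drivers declare a capability set; suites such as volume-expand (volume_expand.go:94 here, and fsgroupchangepolicy.go:79 later in this run) skip when the flag they need is absent. A sketch of that gate, with illustrative capability names:

```go
// Sketch of the per-capability gate behind "does not support volume expansion".
package main

import "fmt"

type Capability string

const CapVolumeExpansion Capability = "volumeExpansion"

func capabilitySkipReason(driver string, caps map[Capability]bool, need Capability) string {
	if !caps[need] {
		return fmt.Sprintf("Driver %q does not support volume expansion - skipping", driver)
	}
	return ""
}

func main() {
	// The in-tree cinder and nfs drivers in this run declare no expansion support.
	fmt.Println(capabilitySkipReason("cinder", map[Capability]bool{}, CapVolumeExpansion))
}
```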
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:05.923: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:05.565: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:05.239: INFO: Driver emptydir doesn't support ntfs -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:55:04.800: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:53.445: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:47.158: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:43.596: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:43.158: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:42.754: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:42.365: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:25.349: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:24.992: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:24.512: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:24.146: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:23.967: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:23.626: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:23.271: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:22.895: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:22.607: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:22.307: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:22.037: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
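The "Driver supports dynamic provisioning, skipping ... pattern" entries above are the inverse of the capability gates: when a driver can provision dynamically, the PreprovisionedPV and InlineVolume variants of the same suite are dropped as redundant (testsuites/base.go:244). A minimal sketch of that decision; the function name is illustrative:

```go
// Sketch of the redundancy gate at testsuites/base.go:244.
package main

import "fmt"

func redundantPatternSkip(hasDynamicProvisioning bool, pattern string) string {
	if hasDynamicProvisioning && (pattern == "PreprovisionedPV" || pattern == "InlineVolume") {
		return fmt.Sprintf("Driver supports dynamic provisioning, skipping %s pattern", pattern)
	}
	return ""
}

func main() {
	fmt.Println(redundantPatternSkip(true, "PreprovisionedPV"))
	fmt.Println(redundantPatternSkip(true, "InlineVolume"))
}
```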
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:21.738: INFO: Driver "nfs" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:21.431: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:54:21.146: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:49.065: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:48.706: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:48.279: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:47.868: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:47.427: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:47.050: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:46.709: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:33.822: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:33.413: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:33.070: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:32.960: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:32.552: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:32.022: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume
Oct 13 09:53:25.640: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:53:25.872810 21053 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:53:25.872: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:196
Oct 13 09:53:25.878: INFO: Driver "hostPathSymlink" does not support exec - skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-1319" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "hostPathSymlink" does not support exec - skipping
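Note that this gate fires inside the spec body rather than in a BeforeEach: volumes.go:196 calls the exec-support check at volumes.go:106 after the framework has already built the namespace, which is why the entry shows a full namespace create/destroy around the skip. A sketch of that shape, with an illustrative capability field:

```go
// Sketch of a skip decision taken inside the test body itself.
package main

import "fmt"

type driver struct {
	name    string
	capExec bool // illustrative stand-in for the framework's exec capability
}

func execTestBody(d driver) {
	if !d.capExec {
		fmt.Printf("Driver %q does not support exec - skipping\n", d.name)
		return
	}
	fmt.Println("exec'ing a file on the mounted volume for", d.name)
}

func main() {
	execTestBody(driver{name: "hostPathSymlink"}) // capExec defaults to false
}
```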
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:24.921: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:24.482: INFO: Driver csi-hostpath doesn't support ext3 -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:16.624: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:14.987: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:14.619: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:14.251: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:13.966: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:13.857: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:13.562: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:05.061: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:04.674: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:04.328: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:03.126: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:02.733: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:02.408: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:53:01.407: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:36.135: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:52:30.628: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:30.887790 18638 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:30.887: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:52:30.893: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-3559" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:21.841: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:52:21.758: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:21.975020 18566 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:21.975: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:52:21.982: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-7184" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:21.509: INFO: Driver "csi-hostpath" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:21.160: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:20.845: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:20.514: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:20.176: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:19.851: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:19.471: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:19.095: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:18.129: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:52:15.934: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:16.145302 18284 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:16.145: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ostest-n5rnf-worker-0-8kq82" using path "/tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe"
Oct 13 09:52:18.238: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe && dd if=/dev/zero of=/tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 13 09:52:18.421: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:52:18.592: INFO: Creating a PV followed by a PVC
Oct 13 09:52:18.642: INFO: Waiting for PV local-pvjnn2c to bind to PVC pvc-fg46f
Oct 13 09:52:18.642: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fg46f] to have phase Bound
Oct 13 09:52:18.678: INFO: PersistentVolumeClaim pvc-fg46f found but phase is Pending instead of Bound.
Oct 13 09:52:20.683: INFO: PersistentVolumeClaim pvc-fg46f found and phase=Bound (2.040523355s)
Oct 13 09:52:20.683: INFO: Waiting up to 3m0s for PersistentVolume local-pvjnn2c to have phase Bound
Oct 13 09:52:20.686: INFO: PersistentVolume local-pvjnn2c found and phase=Bound (3.621209ms)
[BeforeEach] Set fsGroup for local volume k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
Oct 13 09:52:20.694: INFO: We don't set fsGroup on block device, skipped.
[AfterEach] [Volume type: block] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:52:20.694: INFO: Deleting PersistentVolumeClaim "pvc-fg46f"
Oct 13 09:52:20.705: INFO: Deleting PersistentVolume "local-pvjnn2c"
Oct 13 09:52:20.729: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Tear down block device "/dev/loop0" on node "ostest-n5rnf-worker-0-8kq82" at path /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe/file
Oct 13 09:52:20.869: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Removing the test directory /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe
Oct 13 09:52:21.001: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d7c33441-feb1-4157-b1b4-545a6c39ffbe] Namespace:e2e-persistent-local-volumes-test-1836 PodName:hostexec-ostest-n5rnf-worker-0-8kq82-5hczf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-1836" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:263]: We don't set fsGroup on block device, skipped.
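The entry above records the whole node-side lifecycle the suite drives for a [Volume type: block] local volume: create a backing file, attach it to a loop device, bind a PV/PVC pair against it, then detach and remove everything in AfterEach. A minimal standalone sketch of the same steps, with an illustrative path in place of the generated one (the test itself runs each command through nsenter from a hostexec pod, as the ExecWithOptions lines show):

    #!/bin/sh
    # Create a directory and a 20 MiB backing file (4096 bytes * 5120 blocks),
    # mirroring the mkdir/dd/losetup sequence in the log.
    dir=/tmp/local-volume-test-example   # illustrative path
    mkdir -p "$dir"
    dd if=/dev/zero of="$dir/file" bs=4096 count=5120
    # Attach the file to the first free loop device, then look up which one.
    losetup -f "$dir/file"
    loopdev=$(losetup | grep "$dir/file" | awk '{ print $1 }')
    echo "backing device: $loopdev"
    # ... a local PV/PVC pair pointing at $loopdev would be created here ...
    # Teardown, mirroring the AfterEach: detach the device, remove the directory.
    losetup -d "$loopdev"
    rm -r "$dir"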
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:15.492: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:15.157: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:14.848: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:08.753: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:08.423: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:08.149: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-apps] ReplicaSet k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename replicaset
Oct 13 09:52:07.500: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:07.750848 17854 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:07.750: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a private image [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/apps/replica_set.go:113
Oct 13 09:52:07.757: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-apps] ReplicaSet k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-replicaset-1554" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/replica_set.go:115]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistentvolumereclaim
Oct 13 09:52:04.854: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:52:05.105156 17828 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:52:05.105: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:47
[BeforeEach] persistentvolumereclaim:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:54
Oct 13 09:52:05.111: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] persistentvolumereclaim:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:63
STEP: running testCleanupVSpherePersistentVolumeReclaim
[AfterEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistentvolumereclaim-6513" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:52:04.294: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:44.708: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:44.321: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:43.881: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:26.965: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:26.599: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:26.211: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:16.746: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:16.423: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:16.123: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:15.472: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:51:15.523: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:51:15.738224 15766 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:51:15.738: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:51:15.741: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-6943" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:14.983: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:51:14.799: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:51:15.085335 15731 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:51:15.085: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:51:15.088: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-1513" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:14.294: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:51:14.408: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:51:14.627518 15703 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:51:14.627: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:51:14.634: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-4899" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:13.952: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:13.798: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:13.607: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:13.526: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:13.242: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:13.126: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename topology
Oct 13 09:51:13.225: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:51:13.405591 15605 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:51:13.405: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:192
Oct 13 09:51:13.421: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:nova]
Oct 13 09:51:13.421: INFO: In-tree plugin kubernetes.io/cinder is not migrated, not validating any metrics
Oct 13 09:51:13.421: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-topology-25" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:199]: Not enough topologies in cluster -- skipping
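This skip fires because every node in the cluster reports the same failure-domain.beta.kubernetes.io/zone label (here, nova), so there is no second zone for a pod's topology constraints to conflict with. A quick way to check what the suite saw, assuming kubectl/oc access to the same cluster and the label key shown in the log:

    # Show each node's zone label as an extra column.
    kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
    # Or count the distinct zones; a single value means this test has
    # nothing to conflict with and will skip.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}{"\n"}{end}' | sort -u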
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:12.857: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:12.449: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:12.019: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:11.579: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:06.605: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:06.239: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:51:05.858: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:50:58.028: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:50:58.226298 14720 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:50:58.226: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with pvc data source [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:239
Oct 13 09:50:58.233: INFO: Driver "nfs" does not support cloning - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-1802" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "nfs" does not support cloning - skipping
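The "pvc data source" case exercises PVC cloning: a new claim whose spec.dataSource names an existing PVC, so the provisioner creates the new volume as a copy. The nfs test driver advertises no cloning capability, hence the skip. A minimal sketch of the kind of manifest such a test builds, with illustrative names ("source-pvc", "example-sc"); the storage class must be backed by a driver that actually implements cloning:

    # Clone an existing claim by naming it as the dataSource of a new one.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cloned-pvc
    spec:
      storageClassName: example-sc
      dataSource:
        kind: PersistentVolumeClaim
        name: source-pvc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF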
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:57.374: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:27.430: INFO: Driver "nfs" does not support topology - skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:26.987: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:26.617: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:26.226: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:24.028: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
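The (allowExpansion) patterns only run where the driver under test can grow a volume in place, which is why this gcepd pattern skips on OpenStack. The mechanism they exercise is standard Kubernetes expansion: a StorageClass that opts in with allowVolumeExpansion, plus a bound PVC whose requested size is patched upward. A minimal sketch with illustrative names (the provisioner line is an assumption; any plugin that supports expansion works):

    # A storage class that permits in-place volume growth.
    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: expandable-sc            # illustrative name
    provisioner: kubernetes.io/cinder  # illustrative; must support expansion
    allowVolumeExpansion: true
    EOF
    # Growing a bound claim is then a single patch of the storage request.
    kubectl patch pvc example-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'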
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:12.129: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:11.696: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:11.271: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:10.835: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:08.658: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-placement
Oct 13 09:50:07.987: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:50:08.188434 12893 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:50:08.188: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55
Oct 13 09:50:08.197: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-placement-6470" for this suite.
[AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:06.351: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:05.952: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:50:01.377: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:59.544: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:27.665: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:27.222: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:26.878: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:25.146: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:21.615: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:21.284: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:20.940: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:20.610: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:20.285: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:19.967: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:19.641: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:16.806: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:12.106: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:11.762: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:11.437: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/networking.go:85]: Unexpected error: <*errors.errorString | 0xc002a0ba70>: { s: "pod \"connectivity-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.196.2.169 PodIP:10.128.174.211 PodIPs:[{IP:10.128.174.211}] StartTime:2022-10-13 09:49:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:agnhost-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-10-13 09:49:33 +0000 UTC,FinishedAt:2022-10-13 09:50:03 +0000 UTC,ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920 Started:0xc002396fd5}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", } pod "connectivity-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:50:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-10-13 09:49:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.196.2.169 PodIP:10.128.174.211 PodIPs:[{IP:10.128.174.211}] StartTime:2022-10-13 09:49:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:agnhost-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-10-13 09:49:33 +0000 UTC,FinishedAt:2022-10-13 09:50:03 +0000 UTC,ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:cri-o://00bed2d84a681685762760df683092f0e8ca47470bba429b9e0e73a9f72c5920 Started:0xc002396fd5}] QOSClass:BestEffort EphemeralContainerStatuses:[]} occurred
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] Networking k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename nettest Oct 13 09:49:08.484: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:49:08.700481 10397 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:49:08.700: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provide Internet connection for containers [Feature:Networking-IPv4] [Skipped:Disconnected] [Skipped:azure] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/network/networking.go:83 STEP: Running container which tries to connect to 8.8.8.8 Oct 13 09:49:08.742: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "e2e-nettest-7251" to be "Succeeded or Failed" Oct 13 09:49:08.748: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.277044ms Oct 13 09:49:10.752: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01012135s Oct 13 09:49:12.764: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021428112s Oct 13 09:49:14.778: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035815163s Oct 13 09:49:16.786: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043217684s Oct 13 09:49:18.809: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066816996s Oct 13 09:49:20.820: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.077917742s Oct 13 09:49:22.856: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.11393818s Oct 13 09:49:24.868: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.125807143s Oct 13 09:49:26.877: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.135003283s Oct 13 09:49:28.899: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.156629149s Oct 13 09:49:30.908: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.166013815s Oct 13 09:49:32.913: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.17085184s Oct 13 09:49:34.929: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 26.186318883s Oct 13 09:49:36.935: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 28.192212983s Oct 13 09:49:38.940: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 30.197728174s Oct 13 09:49:40.947: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 32.204458377s Oct 13 09:49:42.952: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 34.209981506s Oct 13 09:49:44.958: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 36.2154586s Oct 13 09:49:46.962: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 38.219572766s Oct 13 09:49:48.978: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 40.235607207s Oct 13 09:49:50.992: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 42.250059674s Oct 13 09:49:53.004: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 44.261910632s Oct 13 09:49:55.023: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 46.280628459s Oct 13 09:49:57.032: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 48.289490479s Oct 13 09:49:59.044: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 50.301753338s Oct 13 09:50:01.050: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 52.307451321s Oct 13 09:50:03.056: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=true. Elapsed: 54.314059414s Oct 13 09:50:05.068: INFO: Pod "connectivity-test": Phase="Running", Reason="", readiness=false. Elapsed: 56.325438886s Oct 13 09:50:07.087: INFO: Pod "connectivity-test": Phase="Failed", Reason="", readiness=false. Elapsed: 58.34496563s Oct 13 09:50:07.140: INFO: pod e2e-nettest-7251/connectivity-test logs: nc: connect to 8.8.8.8 port 53 (tcp) timed out: Operation in progress [AfterEach] [sig-network] Networking k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "e2e-nettest-7251". STEP: Found 5 events. Oct 13 09:50:07.147: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for connectivity-test: { } Scheduled: Successfully assigned e2e-nettest-7251/connectivity-test to ostest-n5rnf-worker-0-94fxs Oct 13 09:50:07.147: INFO: At 2022-10-13 09:49:33 +0000 UTC - event for connectivity-test: {multus } AddedInterface: Add eth0 [10.128.174.211/23] from kuryr Oct 13 09:50:07.147: INFO: At 2022-10-13 09:49:33 +0000 UTC - event for connectivity-test: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine Oct 13 09:50:07.147: INFO: At 2022-10-13 09:49:33 +0000 UTC - event for connectivity-test: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container agnhost-container Oct 13 09:50:07.147: INFO: At 2022-10-13 09:49:33 +0000 UTC - event for connectivity-test: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container agnhost-container Oct 13 09:50:07.152: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 09:50:07.152: INFO: connectivity-test ostest-n5rnf-worker-0-94fxs Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:49:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:50:04 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:50:04 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:49:08 +0000 UTC }] Oct 13 09:50:07.152: INFO: Oct 13 09:50:07.173: INFO: skipping dumping cluster info - cluster too large STEP: Destroying namespace "e2e-nettest-7251" for this suite.
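The failing check here is a plain TCP egress probe: per the pod log line above, the agnhost container ran nc against 8.8.8.8 port 53 and the connect timed out, so the container exited 1 and the pod ended in Phase:Failed. A minimal Go sketch of the same probe, runnable from a pod on the affected node to reproduce the check (illustrative only, not the e2e framework's code; the 30s timeout is an assumption mirroring the container's observed 09:49:33 to 09:50:03 runtime):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same probe the connectivity-test pod performs: a TCP dial to 8.8.8.8:53.
	// The 30s timeout is an assumed value matching the observed container runtime.
	conn, err := net.DialTimeout("tcp", "8.8.8.8:53", 30*time.Second)
	if err != nil {
		// The agnhost container failed at this point:
		// "nc: connect to 8.8.8.8 port 53 (tcp) timed out"
		fmt.Fprintf(os.Stderr, "connect failed: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("connectivity OK")
}

If the dial also times out from other pods on ostest-n5rnf-worker-0-94fxs, the failure likely lies in pod egress through the Kuryr-managed network (the pod interface was added "from kuryr") rather than in the test itself.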
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:49:01.834: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:50.750: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:50.429: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:50.062: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:49.713: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:48:49.080: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:48:49.309187 9878 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:48:49.309: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:48:49.313: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-3734" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:48.492: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:48.087: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:47.706: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:46.943: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:30.698: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:30.364: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:27.734: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename provisioning Oct 13 09:48:27.772: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:48:27.948574 8687 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:48:27.948: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:395 Oct 13 09:48:27.957: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Oct 13 09:48:27.992: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-e2e-provisioning-5038" in namespace "e2e-provisioning-5038" to be "Succeeded or Failed" Oct 13 09:48:27.999: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 7.065236ms Oct 13 09:48:30.014: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021224674s Oct 13 09:48:32.021: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028396261s Oct 13 09:48:34.034: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04177016s Oct 13 09:48:36.041: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048702619s Oct 13 09:48:38.047: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054787811s Oct 13 09:48:40.055: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063010201s Oct 13 09:48:42.060: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 14.068043533s Oct 13 09:48:44.064: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 16.071757724s Oct 13 09:48:46.072: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 18.079611956s Oct 13 09:48:48.079: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 20.086499453s Oct 13 09:48:50.087: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 22.094405184s Oct 13 09:48:52.098: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 24.105475345s Oct 13 09:48:54.108: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 26.115741658s Oct 13 09:48:56.117: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 28.124893913s Oct 13 09:48:58.127: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 30.134633358s Oct 13 09:49:00.137: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 32.144367474s Oct 13 09:49:02.141: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 34.148473412s Oct 13 09:49:04.150: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 36.157241569s Oct 13 09:49:06.156: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 38.163996632s Oct 13 09:49:08.169: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 40.176627455s Oct 13 09:49:10.197: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 42.204908262s Oct 13 09:49:12.208: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 44.215467635s Oct 13 09:49:14.218: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 46.22602386s STEP: Saw pod success Oct 13 09:49:14.218: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038" satisfied condition "Succeeded or Failed" Oct 13 09:49:14.218: INFO: Deleting pod "hostpath-symlink-prep-e2e-provisioning-5038" in namespace "e2e-provisioning-5038" Oct 13 09:49:14.262: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-e2e-provisioning-5038" to be fully deleted Oct 13 09:49:14.275: INFO: Creating resource for inline volume Oct 13 09:49:14.275: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source STEP: Deleting pod Oct 13 09:49:14.276: INFO: Deleting pod "pod-subpath-test-inlinevolume-2tt9" in namespace "e2e-provisioning-5038" Oct 13 09:49:14.349: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-e2e-provisioning-5038" in namespace "e2e-provisioning-5038" to be "Succeeded or Failed" Oct 13 09:49:14.369: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 20.111261ms Oct 13 09:49:16.379: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030590341s Oct 13 09:49:18.400: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051534837s Oct 13 09:49:20.412: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063376148s Oct 13 09:49:22.433: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084303031s Oct 13 09:49:24.443: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09435229s STEP: Saw pod success Oct 13 09:49:24.443: INFO: Pod "hostpath-symlink-prep-e2e-provisioning-5038" satisfied condition "Succeeded or Failed" Oct 13 09:49:24.443: INFO: Deleting pod "hostpath-symlink-prep-e2e-provisioning-5038" in namespace "e2e-provisioning-5038" Oct 13 09:49:24.459: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-e2e-provisioning-5038" to be fully deleted [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-provisioning-5038" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
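The repeated Waiting/Elapsed lines above come from the framework polling the pod phase roughly every 2s, for up to 5m0s, until the pod reports "Succeeded or Failed". A minimal client-go sketch of an equivalent wait loop (a sketch under assumed names such as waitForPodTerminated, not the framework's own helper):

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodTerminated polls the pod phase every 2s (matching the ~2s spacing
// of the INFO lines above) until it is Succeeded or Failed, or 5m0s elapses.
func waitForPodTerminated(cs kubernetes.Interface, ns, name string) (v1.PodPhase, error) {
	var phase v1.PodPhase
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		return phase == v1.PodSucceeded || phase == v1.PodFailed, nil
	})
	return phase, err
}

func main() {
	// Load kubeconfig from the default home location; adjust for in-cluster use.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	phase, err := waitForPodTerminated(cs, "e2e-provisioning-5038", "hostpath-symlink-prep-e2e-provisioning-5038")
	fmt.Println(phase, err)
}

The interval and timeout mirror the values visible in the log; the namespace and pod name are the ones from this test run.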
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:27.236: INFO: Driver "nfs" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:26.795: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:08.663: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:04.114: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:03.798: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:03.463: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:03.127: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:02.748: INFO: Driver hostPath doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:48:02.436: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:57.134: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:56.778: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:56.456: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:56.127: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:55.759: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:55.448: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:44.091: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:37.993: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-vsan-policy Oct 13 09:47:37.420: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:47:37.670632 7043 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:47:37.670: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86 Oct 13 09:47:37.679: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-vsan-policy-8444" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:32.484: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:32.163: INFO: Driver nfs doesn't support Block -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:31.836: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:31.445: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:47:27.237: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:26.154: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:00.724: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:47:00.281: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:59.474: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:59.162: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:58.843: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:55.528: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] vsphere statefulset [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename vsphere-statefulset
Oct 13 09:46:54.956: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:46:55.117719 5302 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:46:55.117: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vsphere statefulset [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_statefulsets.go:63
Oct 13 09:46:55.125: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] vsphere statefulset [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-vsphere-statefulset-7657" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_statefulsets.go:64]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:44.366: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:40.863: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:40.506: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:40.122: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:39.746: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:16.714: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:16.389: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:16.064: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:15.740: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:15.379: INFO: Driver "cinder" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:14.967: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:10.709: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:46:10.394: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:49.799: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:49.514: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:49.106: INFO: Driver nfs doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:48.667: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:48.286: INFO: Driver nfs doesn't support Block -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:106]: Driver nfs doesn't support Block -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:42.950: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:42.608: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:45:41.811: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:45:42.102646 2512 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:45:42.102: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:180
Oct 13 09:45:42.109: INFO: Driver "csi-hostpath" does not define supported mount option - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-4538" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "csi-hostpath" does not define supported mount option - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:45:41.027: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:45:41.209952 2500 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:45:41.210: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:201
Oct 13 09:45:41.218: INFO: Driver "cinder" does not support populate data from snapshot - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-9489" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "cinder" does not support populate data from snapshot - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:25.988: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:03.492: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:45:03.088: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:57.017: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:51.803: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:51.470: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:40.639: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:40.277: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:39.929: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:39.645: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:43:39.045: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:43:39.295407 1046284 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:43:39.295: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:43:39.307: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-5119" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:34.507: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:34.158: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:33.780: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:33.496: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:33.190: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:32.859: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:32.551: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:31.227: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:08.129: INFO: Driver "cinder" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:07.758: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:07.406: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:06.984: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:06.573: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:06.218: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-node] AppArmor k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename apparmor
Oct 13 09:43:03.761: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:43:03.990282 1044831 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:43:03.990: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles k8s.io/kubernetes@v1.22.1/test/e2e/node/apparmor.go:32
Oct 13 09:43:04.002: INFO: Only supported for node OS distro [gci ubuntu] (not custom)
[AfterEach] load AppArmor profiles k8s.io/kubernetes@v1.22.1/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-apparmor-900" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/framework/skipper/skipper.go:291]: Only supported for node OS distro [gci ubuntu] (not custom)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 09:43:03.011: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:43:03.185013 1044818 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:43:03.185: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 09:43:03.191: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-6606" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:43:01.921: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 09:42:53.267: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:42:53.441407 1044402 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:42:53.441: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
Oct 13 09:42:53.445: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-9480" for this suite.
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:112
Oct 13 09:42:53.463: INFO: AfterEach: Cleaning up test resources
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:47.752: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:47.363: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:46.907: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:46.576: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:46.194: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:45.805: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:42:45.430: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:42:45.042: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:42:35.852: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:42:35.529: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:42:35.226: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:42:31.313: INFO: Driver "csi-hostpath" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:42:22.330: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:42:21.957: INFO: Driver nfs doesn't support ext3 -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:42:18.900: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:42:18.524: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:42:00.009: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:59.642: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:59.334: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:58.966: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-api-machinery] API priority and fairness
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename apf
Oct 13 09:41:58.390: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:58.602352 1042481 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:58.602: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that requests can't be drowned out (priority) [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:98
[AfterEach] [sig-api-machinery] API priority and fairness
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-apf-7242" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:100]: skipping test until flakiness is resolved
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:57.840: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 09:41:57.196: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:57.482584 1041865 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:57.483: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:77
Oct 13 09:41:57.493: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-3949" for this suite.
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:111
Oct 13 09:41:57.505: INFO: AfterEach: Cleaning up test resources
Oct 13 09:41:57.505: INFO: pvc is nil
Oct 13 09:41:57.505: INFO: pv is nil
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:56.309: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:56.279: INFO: Driver cinder doesn't publish storage capacity -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/capacity.go:78]: Driver cinder doesn't publish storage capacity -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:55.961: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:55.912: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:55.499: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:54.032: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:53.676: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:48.616: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-provision
Oct 13 09:41:19.991: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:20.146030 1041080 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:20.146: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:52
Oct 13 09:41:20.149: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-provision-814" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:17.119: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:16.701: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49 Oct 13 09:41:16.314: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:15.879: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:12.767: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:12.464: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:41:11.848: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:12.067360 1040597 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:12.067: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:41:12.071: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-1147" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:11.255: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:10.904: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename vcp-stress
Oct 13 09:41:10.257: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:41:10.566408 1040556 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:41:10.566: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_stress.go:60
Oct 13 09:41:10.570: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-vcp-stress-2848" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_stress.go:61]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:09.549: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:09.225: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:03.357: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:03.000: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:02.649: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:02.203: INFO: Driver emptydir doesn't support ext3 -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:00.946: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:41:00.629: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:59.494: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:58.105: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:57.750: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:49.905: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:49.549: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:26.219: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:25.869: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:25.536: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-auth] Metadata Concealment
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename metadata-concealment
Oct 13 09:40:25.003: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:40:25.184032 1038423 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:40:25.184: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a check-metadata-concealment job to completion [Skipped:gce] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/auth/metadata_concealment.go:34
Oct 13 09:40:25.187: INFO: Only supported for providers [gce] (not openstack)
[AfterEach] [sig-auth] Metadata Concealment
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-metadata-concealment-1153" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/auth/metadata_concealment.go:35]: Only supported for providers [gce] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:24.521: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:24.212: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:23.836: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:40:23.299: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:40:23.528084 1038370 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:40:23.528: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:40:23.533: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-4254" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:22.761: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:18.199: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:17.871: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:17.517: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:17.181: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:16.436: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:16.104: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:15.782: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:15.458: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:14.163: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:13.865: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:13.533: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:13.217: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:12.892: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:11.184: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:10.834: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:09.332: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:40:08.790: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:40:09.028353 1037785 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:40:09.029: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:40:09.034: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-9705" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:08.053: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:40:07.641: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:40:00.543: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:40:00.957352 1037211 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:40:00.957: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 13 09:40:03.101: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb && mount --bind /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb] Namespace:e2e-persistent-local-volumes-test-8619 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-jlx5b ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:40:03.274: INFO: Creating a PV followed by a PVC
Oct 13 09:40:03.293: INFO: Waiting for PV local-pv9zn2j to bind to PVC pvc-h7f9z
Oct 13 09:40:03.293: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-h7f9z] to have phase Bound
Oct 13 09:40:03.298: INFO: PersistentVolumeClaim pvc-h7f9z found but phase is Pending instead of Bound.
Oct 13 09:40:05.306: INFO: PersistentVolumeClaim pvc-h7f9z found but phase is Pending instead of Bound.
Oct 13 09:40:07.313: INFO: PersistentVolumeClaim pvc-h7f9z found and phase=Bound (4.02005597s)
Oct 13 09:40:07.313: INFO: Waiting up to 3m0s for PersistentVolume local-pv9zn2j to have phase Bound
Oct 13 09:40:07.320: INFO: PersistentVolume local-pv9zn2j found and phase=Bound (7.39606ms)
[BeforeEach] Set fsGroup for local volume k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 09:40:07.331: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-bindmounted] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:40:07.332: INFO: Deleting PersistentVolumeClaim "pvc-h7f9z"
Oct 13 09:40:07.350: INFO: Deleting PersistentVolume "local-pv9zn2j"
STEP: Removing the test directory
Oct 13 09:40:07.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb && rm -r /tmp/local-volume-test-6f1ab880-c466-43da-8bd4-a155658ce1eb] Namespace:e2e-persistent-local-volumes-test-8619 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-jlx5b ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-8619" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
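The bind wait in the record above (poll roughly every 2s, 3m timeout) is the standard claim-phase wait. A minimal client-go sketch of the same loop, assuming a kubeconfig at the default location; waitForPVCBound is an illustrative helper, not the suite's actual one, and the namespace and claim names are copied from the log only as example inputs:

```go
// Poll a PersistentVolumeClaim until it reports phase Bound, mirroring the
// "found but phase is Pending instead of Bound" lines in the record above.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound re-reads the claim every 2s until it is Bound or the
// timeout elapses.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != v1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(cs, "e2e-persistent-local-volumes-test-8619", "pvc-h7f9z", 3*time.Minute); err != nil {
		panic(err)
	}
}
```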
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:38.882: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:38.499: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:29.564: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:29.233: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:26.764: INFO: Driver cinder doesn't support ext3 -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:26.375: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:19.569: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:01.572: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:01.251: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:39:00.122: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:59.780: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:12.438: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:11.940: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:10.677: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:10.320: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:09.992: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:09.647: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:09.331: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:09.024: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:08.696: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:08.360: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:07.947: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
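The record above shows the inverse gate: when a driver can provision dynamically, the redundant pre-provisioned variant of the same test is skipped (testsuites/base.go:244). A sketch under the same kind of hypothetical types as the capability example earlier:

```go
// Hypothetical sketch of the "Driver supports dynamic provisioning,
// skipping PreprovisionedPV pattern" gate.
package main

import "fmt"

type driver struct {
	name      string
	dynamicPV bool
}

// skipRedundantPreprovisioned drops the PreprovisionedPV pattern when the
// driver already covers the same ground via dynamic provisioning.
func skipRedundantPreprovisioned(d driver, pattern string) (bool, string) {
	if pattern == "PreprovisionedPV" && d.dynamicPV {
		return true, "Driver supports dynamic provisioning, skipping PreprovisionedPV pattern"
	}
	return false, ""
}

func main() {
	cinder := driver{name: "cinder", dynamicPV: true}
	if skip, msg := skipRedundantPreprovisioned(cinder, "PreprovisionedPV"); skip {
		fmt.Println("skip:", msg)
	}
}
```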
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:01.447: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:01.035: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:00.596: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:38:00.206: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:58.831: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:54.642: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:54.619: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:54.212: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:53.807: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:52.607: INFO: Driver nfs doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:52.191: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:37:49.493: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:37:49.731443 1032321 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:37:49.731: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 13 09:37:51.820: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend && mount --bind /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend && ln -s /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc] Namespace:e2e-persistent-local-volumes-test-2337 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-8jkpv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:37:51.974: INFO: Creating a PV followed by a PVC
Oct 13 09:37:52.013: INFO: Waiting for PV local-pvzhlcx to bind to PVC pvc-xgw9b
Oct 13 09:37:52.013: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xgw9b] to have phase Bound
Oct 13 09:37:52.017: INFO: PersistentVolumeClaim pvc-xgw9b found but phase is Pending instead of Bound.
Oct 13 09:37:54.027: INFO: PersistentVolumeClaim pvc-xgw9b found and phase=Bound (2.01395801s)
Oct 13 09:37:54.027: INFO: Waiting up to 3m0s for PersistentVolume local-pvzhlcx to have phase Bound
Oct 13 09:37:54.031: INFO: PersistentVolume local-pvzhlcx found and phase=Bound (3.930299ms)
[BeforeEach] Set fsGroup for local volume k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 09:37:54.039: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-link-bindmounted] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:37:54.039: INFO: Deleting PersistentVolumeClaim "pvc-xgw9b"
Oct 13 09:37:54.051: INFO: Deleting PersistentVolume "local-pvzhlcx"
STEP: Removing the test directory
Oct 13 09:37:54.064: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc && umount /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend && rm -r /tmp/local-volume-test-b1c31278-558f-4924-941f-6043b390a9cc-backend] Namespace:e2e-persistent-local-volumes-test-2337 PodName:hostexec-ostest-n5rnf-worker-0-j4pkp-8jkpv ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-2337" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
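The ExecWithOptions lines above run the volume setup and teardown through nsenter in a hostexec pod. For the host-side shape of the "dir-link-bindmounted" volume type in isolation, here is a hedged os/exec sketch; the paths are placeholders, root is required for mount, and the real suite never runs this directly on the test host:

```go
// Recreate the dir-link-bindmounted layout: a backing directory,
// bind-mounted onto itself, reached through a symlink.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell snippet and surfaces its combined output on failure.
func run(script string) error {
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w (%s)", script, err, out)
	}
	return nil
}

func main() {
	backend := "/tmp/local-volume-test-example-backend" // placeholder path
	link := "/tmp/local-volume-test-example"            // placeholder path

	// Setup, as in the "Initializing test volumes" step.
	setup := fmt.Sprintf("mkdir %s && mount --bind %s %s && ln -s %s %s",
		backend, backend, backend, backend, link)
	// Teardown, as in the "Removing the test directory" step.
	teardown := fmt.Sprintf("rm %s && umount %s && rm -r %s", link, backend, backend)

	if err := run(setup); err != nil {
		panic(err)
	}
	defer run(teardown)
	fmt.Println("dir-link-bindmounted volume ready at", link)
}
```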
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:LabelSelector] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pvclabelselector
Oct 13 09:37:48.652: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:37:48.904219 1032308 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:37:48.904: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:LabelSelector] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pvc_label_selector.go:64
Oct 13 09:37:48.910: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:LabelSelector] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pvclabelselector-1693" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pvc_label_selector.go:65]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:48.129: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:47.777: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:47.412: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Oct 13 09:37:43.064: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:37:43.257117 1032128 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:37:43.257: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 13 09:37:45.337: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b7b8c4d8-0a14-4c44-b0bd-57517ab47b7e] Namespace:e2e-persistent-local-volumes-test-3870 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-8qtdt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
STEP: Creating local PVCs and PVs
Oct 13 09:37:45.467: INFO: Creating a PV followed by a PVC
Oct 13 09:37:45.488: INFO: Waiting for PV local-pvwk9hf to bind to PVC pvc-r7pjx
Oct 13 09:37:45.488: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r7pjx] to have phase Bound
Oct 13 09:37:45.499: INFO: PersistentVolumeClaim pvc-r7pjx found but phase is Pending instead of Bound.
Oct 13 09:37:47.504: INFO: PersistentVolumeClaim pvc-r7pjx found but phase is Pending instead of Bound.
Oct 13 09:37:49.516: INFO: PersistentVolumeClaim pvc-r7pjx found but phase is Pending instead of Bound.
Oct 13 09:37:51.522: INFO: PersistentVolumeClaim pvc-r7pjx found and phase=Bound (6.03387078s)
Oct 13 09:37:51.522: INFO: Waiting up to 3m0s for PersistentVolume local-pvwk9hf to have phase Bound
Oct 13 09:37:51.528: INFO: PersistentVolume local-pvwk9hf found and phase=Bound (6.224597ms)
[BeforeEach] Set fsGroup for local volume k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:286
Oct 13 09:37:51.536: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir] k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 13 09:37:51.536: INFO: Deleting PersistentVolumeClaim "pvc-r7pjx"
Oct 13 09:37:51.551: INFO: Deleting PersistentVolume "local-pvwk9hf"
STEP: Removing the test directory
Oct 13 09:37:51.582: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b7b8c4d8-0a14-4c44-b0bd-57517ab47b7e] Namespace:e2e-persistent-local-volumes-test-3870 PodName:hostexec-ostest-n5rnf-worker-0-94fxs-8qtdt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
[AfterEach] [sig-storage] PersistentVolumes-local k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-persistent-local-volumes-test-3870" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-local.go:287]: Disabled temporarily, reopen after #73168 is fixed
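The "Creating a PV followed by a PVC" step above pairs a pre-provisioned local PV with a claim that binds to it. A hedged client-go sketch of that ordering; all names, the node, the 2Gi size, and the "local-storage" class are placeholders rather than the suite's actual values:

```go
// Create a pre-provisioned local PV, then the PVC that should bind to it.
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func buildLocalPV(name, path, node, scName string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PersistentVolumeSpec{
			Capacity:         v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: scName,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{Path: path},
			},
			// Local PVs must pin to the node that owns the directory.
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{node},
						}},
					}},
				},
			},
		},
	}
}

func buildPVC(ns, name, scName string) *v1.PersistentVolumeClaim {
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &scName,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same ordering as the log: PV first, then the claim that binds to it.
	sc := "local-storage"
	pv := buildLocalPV("local-pv-example", "/tmp/local-volume-test-example", "worker-0", sc)
	if _, err := cs.CoreV1().PersistentVolumes().Create(context.TODO(), pv, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	pvc := buildPVC("e2e-persistent-local-volumes-test-3870", "pvc-example", sc)
	if _, err := cs.CoreV1().PersistentVolumeClaims(pvc.Namespace).Create(context.TODO(), pvc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```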
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:42.567: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:33.360: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:25.345: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:24.973: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:18.918: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:18.544: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:13.878: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:13.547: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:13.227: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:12.856: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:12.481: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:37:05.741: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:37:05.927847 1030432 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:37:05.927: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:37:05.934: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-8344" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:37:05.229: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:24.457: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:24.071: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:23.647: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:23.247: INFO: Driver cinder doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:22.822: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:22.422: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:22.016: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:21.670: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:21.293: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:20.978: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:17.875: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:36:17.551: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:55.077: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:46.265: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:45.930: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:22.598: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:22.201: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:20.490: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:19.065: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:18.747: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:18.653: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:18.327: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:18.288: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:17.943: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:17.861: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:17.525: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:17.180: INFO: Driver "nfs" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:16.843: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:14.421: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:13.029: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:35:07.056: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:54.780: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:53.122: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:52.758: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:32.135: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:31.821: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:31.345: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:30.929: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:30.553: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:30.239: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:29.891: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:23.459: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:23.102: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:22.741: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:34:17.509: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:44.576: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:44.135: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:43.698: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:29.506: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:29.018: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:28.589: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:28.127: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:27.681: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:27.266: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:15.872: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:15.471: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:15.130: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:14.776: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:14.394: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:13.989: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:13.594: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:13.204: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-placement Oct 13 09:33:12.346: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:33:12.829517 1021734 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:33:12.829: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55 Oct 13 09:33:12.839: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-placement-2538" for this suite. [AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename persistentvolumereclaim Oct 13 09:33:11.637: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:33:11.895798 1021722 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:33:11.895: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:47 [BeforeEach] persistentvolumereclaim:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:54 Oct 13 09:33:11.918: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] persistentvolumereclaim:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:63 STEP: running testCleanupVSpherePersistentVolumeReclaim [AfterEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-persistentvolumereclaim-3487" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:10.989: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:10.660: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:05.719: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:33:05.299: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:58.926: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:41.540: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:41.197: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:40.849: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:40.485: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:40.157: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:39.729: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:36.909: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:36.474: INFO: Driver hostPath doesn't support ext3 -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49 Oct 13 09:32:36.058: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:20.886: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:32:01.645: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:52.729: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:52.405: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
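The recurring "Driver X doesn't support Y -- skipping" records above all originate from one capability gate in the storage test framework (the testsuite.go frames cited in each entry): before a test pattern runs, the suite compares the volume type the pattern needs against the set the driver declares, and skips on a mismatch. A minimal self-contained sketch of that gate follows; the type and function names are illustrative stand-ins, not the framework's real API.

package main

import "fmt"

// volumeType mirrors the test patterns recorded above
// (DynamicPV, PreprovisionedPV, InlineVolume, ...).
type volumeType string

// driverInfo is an illustrative stand-in for the framework's driver
// descriptor: a driver name plus the volume types it supports.
type driverInfo struct {
	name      string
	supported map[volumeType]bool
}

// skipUnsupported reproduces the gate's behavior: instead of running the
// pattern, report a skip whose message matches the shape seen in this log.
func skipUnsupported(d driverInfo, v volumeType) (bool, string) {
	if !d.supported[v] {
		return true, fmt.Sprintf("Driver %s doesn't support %s -- skipping", d.name, v)
	}
	return false, ""
}

func main() {
	local := driverInfo{
		name:      "local",
		supported: map[volumeType]bool{"PreprovisionedPV": true},
	}
	if skip, msg := skipUnsupported(local, "DynamicPV"); skip {
		fmt.Println(msg) // Driver local doesn't support DynamicPV -- skipping
	}
}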
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
Oct 13 09:31:48.594: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:31:48.787738 1018869 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:31:48.787: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should scale from 1 pod to 2 pods [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/horizontal_pod_autoscaling.go:69
STEP: Running consuming RC rc-light via /v1, Kind=ReplicationController with 1 replicas
STEP: creating replication controller rc-light in namespace e2e-horizontal-pod-autoscaling-8934
I1013 09:31:48.835690 1018869 runners.go:190] Created replication controller with name: rc-light, namespace: e2e-horizontal-pod-autoscaling-8934, replica count: 1
I1013 09:31:58.889251 1018869 runners.go:190] rc-light Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1013 09:32:08.889587 1018869 runners.go:190] rc-light Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1013 09:32:18.890733 1018869 runners.go:190] rc-light Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1013 09:32:28.891723 1018869 runners.go:190] rc-light Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: Running controller
STEP: creating replication controller rc-light-ctrl in namespace e2e-horizontal-pod-autoscaling-8934
I1013 09:32:28.937551 1018869 runners.go:190] Created replication controller with name: rc-light-ctrl, namespace: e2e-horizontal-pod-autoscaling-8934, replica count: 1
I1013 09:32:38.988064 1018869 runners.go:190] rc-light-ctrl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1013 09:32:48.989163 1018869 runners.go:190] rc-light-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 13 09:32:53.989: INFO: Waiting for amount of service:rc-light-ctrl endpoints to be 1
Oct 13 09:32:53.993: INFO: RC rc-light: consume 150 millicores in total
Oct 13 09:32:53.993: INFO: RC rc-light: sending request to consume 0 millicores
Oct 13 09:32:53.993: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=0&requestSizeMillicores=100 }
Oct 13 09:32:54.002: INFO: RC rc-light: setting consumption to 150 millicores in total
Oct 13 09:32:54.002: INFO: RC rc-light: consume 0 MB in total
Oct 13 09:32:54.002: INFO: RC rc-light: setting consumption to 0 MB in total
Oct 13 09:32:54.002: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:32:54.002: INFO: RC rc-light: consume custom metric 0 in total
Oct 13 09:32:54.002: INFO: RC rc-light: setting bump of metric QPS to 0 in total
Oct 13 09:32:54.002: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:32:54.002: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:32:54.002: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:32:54.018: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:33:14.053: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:33:24.003: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:33:24.003: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:33:24.015: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:33:24.015: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:33:24.016: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:33:24.016: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:33:34.035: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:33:54.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:34:14.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:34:24.018: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:34:24.018: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:34:24.018: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:34:24.018: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:34:24.018: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:34:24.018: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:34:34.030: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:34:54.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:35:14.026: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:35:24.026: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:35:24.026: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:35:24.026: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:35:24.026: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:35:24.026: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:35:24.026: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:35:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:35:54.030: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:35:54.043: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:35:54.043: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:35:54.043: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:35:54.043: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:35:54.071: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:35:54.071: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:36:14.026: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:36:24.059: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:36:24.059: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:36:24.059: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:36:24.059: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:36:24.109: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:36:24.109: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:36:34.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:36:54.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:36:54.074: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:36:54.074: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:36:54.075: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:36:54.075: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:36:54.148: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:36:54.148: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:37:14.022: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:37:24.083: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:37:24.083: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:37:24.083: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:37:24.083: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:37:24.199: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:37:24.199: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:37:34.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:37:54.028: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:37:54.092: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:37:54.092: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:37:54.092: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:37:54.092: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:37:54.236: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:37:54.236: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:38:14.022: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:38:24.100: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:38:24.100: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:38:24.110: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:38:24.110: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:38:24.280: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:38:24.280: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:38:34.023: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:38:54.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:38:54.111: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:38:54.111: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:38:54.118: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:38:54.118: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:38:54.313: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:38:54.314: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:39:14.029: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:39:24.120: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:39:24.120: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:39:24.126: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:39:24.127: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:39:24.357: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:39:24.357: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:39:34.023: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:39:54.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:39:54.130: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:39:54.130: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:39:54.137: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:39:54.137: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:39:54.415: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:39:54.415: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:40:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:24.146: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:40:24.146: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:40:24.146: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:40:24.146: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:40:24.444: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:40:24.444: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:40:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:54.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:40:54.157: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:40:54.157: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:40:54.157: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:40:54.157: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:40:54.476: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:40:54.477: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:41:14.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:24.169: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:41:24.169: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:41:24.169: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:41:24.169: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:41:24.519: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:41:24.519: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:41:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:54.030: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:41:54.179: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:41:54.179: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:41:54.180: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:41:54.180: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:41:54.552: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:41:54.553: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:42:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:24.187: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:42:24.187: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:42:24.187: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:42:24.187: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:42:24.604: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:42:24.605: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:42:34.025: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:54.036: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:42:54.196: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:42:54.196: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:42:54.196: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:42:54.196: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:42:54.642: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:42:54.643: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:43:14.026: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:24.223: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:43:24.223: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:43:24.223: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:43:24.223: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:43:24.707: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:43:24.707: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:43:34.027: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:54.028: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:43:54.230: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:43:54.230: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:43:54.230: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:43:54.230: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:43:54.781: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:43:54.781: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:44:14.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:24.239: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:44:24.239: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:44:24.239: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:44:24.239: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:44:24.820: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:44:24.820: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:44:34.028: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:54.033: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:44:54.250: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:44:54.250: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:44:54.251: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:44:54.251: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:44:54.874: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:44:54.874: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:45:14.023: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:24.266: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:45:24.266: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:45:24.266: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:45:24.266: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:45:24.901: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:45:24.902: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:45:34.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:54.024: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:45:54.279: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:45:54.279: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:45:54.279: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:45:54.280: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:45:54.929: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:45:54.929: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:46:14.029: INFO: waiting for 2 replicas (current: 1)
Oct 13 09:46:24.294: INFO: RC rc-light: sending request to consume 0 of custom metric QPS
Oct 13 09:46:24.294: INFO: ConsumeCustomMetric URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/BumpMetric false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 }
Oct 13 09:46:24.305: INFO: RC rc-light: sending request to consume 0 MB
Oct 13 09:46:24.305: INFO: ConsumeMem URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeMem false durationSec=30&megabytes=0&requestSizeMegabytes=100 }
Oct 13 09:46:24.974: INFO: RC rc-light: sending request to consume 150 millicores
Oct 13 09:46:24.974: INFO: ConsumeCPU URL: {https api.ostest.shiftstack.com:6443 /api/v1/namespaces/e2e-horizontal-pod-autoscaling-8934/services/rc-light-ctrl/proxy/ConsumeCPU false durationSec=30&millicores=150&requestSizeMillicores=100 }
Oct 13 09:46:34.024: INFO: waiting for 2 replicas (current: 1)
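The test above stalls at "waiting for 2 replicas (current: 1)" for the entire window: the resource consumer holds CPU usage at a steady 150 millicores while the test expects the HPA controller's standard rule, desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), to raise the replica count to 2. A minimal sketch of that arithmetic follows; the 500m per-pod request and 20% utilization target are illustrative assumptions, not values recorded in the log (which shows only the 150m of driven usage and the expected count of 2).

package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the documented HPA scaling rule:
// desired = ceil(currentReplicas * currentMetricValue / desiredMetricValue).
func desiredReplicas(currentReplicas int, currentMilliCPU, targetMilliCPUPerPod float64) int {
	return int(math.Ceil(float64(currentReplicas) * currentMilliCPU / targetMilliCPUPerPod))
}

func main() {
	// Illustrative assumptions (not taken from the log): each rc-light pod
	// requests 500m CPU and the HPA targets 20% utilization, i.e. 100m/pod.
	targetPerPod := 500.0 * 0.20

	// The consumer drives a steady 150 millicores, so a single replica runs
	// at 150% of target and the controller should ask for a second pod.
	fmt.Println(desiredReplicas(1, 150, targetPerPod)) // prints 2
}

That the count never leaves 1 under sustained load suggests the metrics pipeline or the controller, not this arithmetic, is where the failure should be investigated.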
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:48.004: INFO: Driver "nfs" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:47.646: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:47.333: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:47.034: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:46.657: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:46.216: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:23.282: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:31:22.457: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:22.110: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:17.185: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:16.832: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:16.262: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:15.896: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:15.765: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:31:09.833: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:31:10.027229 1016999 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:31:10.027: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:31:10.031: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-4984" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
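Entries like the one above are gated on the cloud provider rather than the driver: the test declares which providers it supports, and the framework compares that list against the provider the run was configured with (openstack here). A minimal sketch of that gate, with hypothetical names standing in for the framework's skipper:

```go
package main

import "fmt"

// currentProvider would normally come from the e2e framework's test context
// (e.g. a --provider=openstack flag on the test run); hard-coded here.
const currentProvider = "openstack"

// skipUnlessProviderIs reports a skip when the cluster's provider is not in
// the test's supported list, producing the message format seen in this report.
func skipUnlessProviderIs(supported ...string) (bool, string) {
	for _, p := range supported {
		if p == currentProvider {
			return false, ""
		}
	}
	return true, fmt.Sprintf("Only supported for providers %v (not %s)", supported, currentProvider)
}

func main() {
	if skip, reason := skipUnlessProviderIs("vsphere"); skip {
		fmt.Println(reason) // Only supported for providers [vsphere] (not openstack)
	}
}
```

Because this check runs in a suite-level BeforeEach after the namespace is created, the entry still records namespace creation and the "Destroying namespace" teardown around the skip.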
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:09.205: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:31:08.807: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:51.044: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:50.709: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:30:50.164: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:30:50.362282 1016183 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:30:50.362: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with pvc data source [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:239
Oct 13 09:30:50.370: INFO: Driver "cinder" does not support cloning - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-539" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "cinder" does not support cloning - skipping
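This entry shows a finer-grained gate than the pattern-level ones: the skip fires inside the test body ([It]) against a single driver capability (here, cloning from a PVC data source), after the namespace has already been created. A minimal sketch of such a capability lookup; the capability name and map layout are illustrative stand-ins, not the framework's actual definitions:

```go
package main

import "fmt"

// Capability is a named feature a storage driver may or may not declare.
type Capability string

// CapPVCDataSource stands in for the "volume cloning via PVC data source"
// capability that this test requires.
const CapPVCDataSource Capability = "pvcDataSource"

// driverCapabilities models the per-driver capability table the suite
// consults; cinder (the OpenStack in-tree driver) does not declare cloning.
var driverCapabilities = map[string]map[Capability]bool{
	"cinder": {CapPVCDataSource: false},
}

func main() {
	driver := "cinder"
	if !driverCapabilities[driver][CapPVCDataSource] {
		fmt.Printf("Driver %q does not support cloning - skipping\n", driver)
	}
}
```

Because the check runs this late, the AfterEach still destroys the freshly created namespace, which is why the entry records a full setup and teardown around a skipped test.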
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:46.489: INFO: Driver "nfs" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:46.135: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:38.556: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:38.242: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename topology
Oct 13 09:30:37.509: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:30:37.794371 1015947 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:30:37.794: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:192
Oct 13 09:30:37.804: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:nova]
Oct 13 09:30:37.805: INFO: In-tree plugin kubernetes.io/cinder is not migrated, not validating any metrics
Oct 13 09:30:37.805: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-topology-9468" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:199]: Not enough topologies in cluster -- skipping
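The topology skip above depends on cluster shape rather than driver or provider declarations: to provoke a scheduling conflict with AllowedTopologies, the test needs at least two distinct topology domains, but every node in this cluster carries the same zone label (failure-domain.beta.kubernetes.io/zone: nova). A minimal sketch of that counting check, under the assumption that zone labels are the topology key; names and data are illustrative:

```go
package main

import "fmt"

// distinctZones collects the set of unique zone labels seen across nodes.
func distinctZones(nodeZoneLabels []string) map[string]bool {
	zones := make(map[string]bool)
	for _, z := range nodeZoneLabels {
		zones[z] = true
	}
	return zones
}

func main() {
	// Every node in this single-AZ OpenStack cluster reports the same zone.
	labels := []string{"nova", "nova", "nova"}

	// The conflict test needs one allowed zone plus one conflicting zone.
	const minTopologies = 2
	if zones := distinctZones(labels); len(zones) < minTopologies {
		fmt.Println("Not enough topologies in cluster -- skipping")
	}
}
```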
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:36.942: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:36.500: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:36.101: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:26.267: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:25.849: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:25.453: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:25.112: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:24.741: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:24.411: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:24.070: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:23.695: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:23.372: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:22.983: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:30:22.416: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:30:22.625653 1015370 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:30:22.625: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:30:22.631: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-9431" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:21.847: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:49
Oct 13 09:30:21.529: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:50]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:30:18.074: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:50.741: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:50.347: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:46.143: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:45.632: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Mounted volume expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename mounted-volume-expand
Oct 13 09:29:44.999: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:29:45.195860 1013393 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:29:45.195: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Mounted volume expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/mounted_volume_resize.go:61
Oct 13 09:29:45.200: INFO: Only supported for providers [aws gce] (not openstack)
[AfterEach] [sig-storage] Mounted volume expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-mounted-volume-expand-5105" for this suite.
[AfterEach] [sig-storage] Mounted volume expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/mounted_volume_resize.go:108
Oct 13 09:29:45.227: INFO: AfterEach: Cleaning up resources for mounted volume resize
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/mounted_volume_resize.go:62]: Only supported for providers [aws gce] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:44.336: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:43.945: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:33.606: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:32.052: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:26.410: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:21.388: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:21.075: INFO: Driver cinder doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:20.657: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pv
Oct 13 09:29:19.966: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:29:20.238758 1012349 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:29:20.238: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
Oct 13 09:29:20.249: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pv-1138" for this suite.
[AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:112
Oct 13 09:29:20.269: INFO: AfterEach: Cleaning up test resources
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumelimits
Oct 13 09:29:19.129: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:29:19.385256 1012334 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:29:19.385: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that all csinodes have volume limits [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:238
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumelimits-5292" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumelimits.go:241]: driver nfs does not support volume limits
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-fstype
Oct 13 09:29:16.050: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:29:16.255666 1012075 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:29:16.255: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:75
Oct 13 09:29:16.259: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume FStype [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-fstype-1909" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:15.309: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:15.305: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.951: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.922: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.583: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.533: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.142: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:14.118: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:13.725: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:13.743: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:29:13.370: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:42.366: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:33.748: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:33.326: INFO: Only supported for providers [aws] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:29.403: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:29.050: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:28.657: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:28.254: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:27.829: INFO: Only supported for providers [azure] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:16.216: INFO: Only supported for providers [gce gke] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:15.792: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:11.487: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:11.091: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:10.720: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:10.439: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:10.136: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
Oct 13 09:28:08.586: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] DNS k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename dns Oct 13 09:28:08.064: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:28:08.225021 1009576 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:28:08.225: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Provider:GCE] [Skipped:Proxy] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/network/dns.go:68 Oct 13 09:28:08.229: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [sig-network] DNS k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-dns-7419" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/network/dns.go:69]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:28:07.561: INFO: Driver "csi-hostpath" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:28:07.191: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:59.830: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:59.324: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:58.931: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:56.175: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:55.783: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:55.340: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:54.940: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:54.614: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:54.231: INFO: Driver csi-hostpath doesn't support ext4 -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:53.903: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:53.590: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:53.214: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:52.888: INFO: Driver emptydir doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] CSI mock volume k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename csi-mock-volumes Oct 13 09:27:42.390: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:27:42.618651 1008424 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:27:42.618: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1765 STEP: Building a driver namespace object, basename e2e-csi-mock-volumes-4421 Oct 13 09:27:42.806: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 13 09:27:43.035: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-attacher Oct 13 09:27:43.048: INFO: creating *v1.ClusterRole: external-attacher-runner-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.048: INFO: Define cluster role external-attacher-runner-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.060: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.068: INFO: creating *v1.Role: e2e-csi-mock-volumes-4421-6950/external-attacher-cfg-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.074: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-attacher-role-cfg Oct 13 09:27:43.095: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-provisioner Oct 13 09:27:43.108: INFO: creating *v1.ClusterRole: external-provisioner-runner-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.108: INFO: Define cluster role external-provisioner-runner-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.122: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.141: INFO: creating *v1.Role: e2e-csi-mock-volumes-4421-6950/external-provisioner-cfg-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.154: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-provisioner-role-cfg Oct 13 09:27:43.176: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-resizer Oct 13 09:27:43.189: INFO: creating *v1.ClusterRole: external-resizer-runner-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.189: INFO: Define cluster role external-resizer-runner-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.208: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.228: INFO: creating *v1.Role: e2e-csi-mock-volumes-4421-6950/external-resizer-cfg-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.248: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-resizer-role-cfg Oct 13 09:27:43.275: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-snapshotter Oct 13 09:27:43.303: INFO: creating *v1.ClusterRole: external-snapshotter-runner-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.303: INFO: Define cluster role external-snapshotter-runner-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.314: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.331: INFO: creating *v1.Role: e2e-csi-mock-volumes-4421-6950/external-snapshotter-leaderelection-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.342: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/external-snapshotter-leaderelection Oct 13 09:27:43.358: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-mock Oct 13 09:27:43.377: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.399: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.414: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.426: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.450: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.463: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.480: INFO: creating *v1.StorageClass: csi-mock-sc-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.501: INFO: creating *v1.StatefulSet: e2e-csi-mock-volumes-4421-6950/csi-mockplugin Oct 13 09:27:43.520: INFO: creating *v1.CSIDriver: csi-mock-e2e-csi-mock-volumes-4421 Oct 13 09:27:43.530: INFO: creating *v1.StatefulSet: e2e-csi-mock-volumes-4421-6950/csi-mockplugin-snapshotter Oct 13 09:27:43.548: INFO: waiting up to 4m0s for CSIDriver "csi-mock-e2e-csi-mock-volumes-4421" Oct 13 09:27:43.561: INFO: waiting for CSIDriver csi-mock-e2e-csi-mock-volumes-4421 to register on node ostest-n5rnf-worker-0-j4pkp W1013 09:28:25.191663 1008424 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from W1013 09:28:25.191699 1008424 metrics_grabber.go:151] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled. Oct 13 09:28:25.191: INFO: Snapshot controller metrics not found -- skipping STEP: Cleaning up resources STEP: deleting the test namespace: e2e-csi-mock-volumes-4421 STEP: Waiting for namespaces [e2e-csi-mock-volumes-4421] to vanish STEP: uninstalling csi mock driver Oct 13 09:28:57.234: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-attacher Oct 13 09:28:57.256: INFO: deleting *v1.ClusterRole: external-attacher-runner-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.284: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.324: INFO: deleting *v1.Role: e2e-csi-mock-volumes-4421-6950/external-attacher-cfg-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.353: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-attacher-role-cfg Oct 13 09:28:57.378: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-provisioner Oct 13 09:28:57.403: INFO: deleting *v1.ClusterRole: external-provisioner-runner-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.460: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.477: INFO: deleting *v1.Role: e2e-csi-mock-volumes-4421-6950/external-provisioner-cfg-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.501: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-provisioner-role-cfg Oct 13 09:28:57.522: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-resizer Oct 13 09:28:57.538: INFO: deleting *v1.ClusterRole: external-resizer-runner-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.552: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.574: INFO: deleting *v1.Role: e2e-csi-mock-volumes-4421-6950/external-resizer-cfg-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.587: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/csi-resizer-role-cfg Oct 13 09:28:57.601: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-snapshotter Oct 13 09:28:57.611: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.620: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.646: INFO: deleting *v1.Role: e2e-csi-mock-volumes-4421-6950/external-snapshotter-leaderelection-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.666: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-4421-6950/external-snapshotter-leaderelection Oct 13 09:28:57.679: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-4421-6950/csi-mock Oct 13 09:28:57.691: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.711: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.728: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.744: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.762: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.790: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.802: INFO: deleting *v1.StorageClass: csi-mock-sc-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.826: INFO: deleting *v1.StatefulSet: e2e-csi-mock-volumes-4421-6950/csi-mockplugin Oct 13 09:28:57.838: INFO: deleting *v1.CSIDriver: csi-mock-e2e-csi-mock-volumes-4421 Oct 13 09:28:57.852: INFO: deleting *v1.StatefulSet: e2e-csi-mock-volumes-4421-6950/csi-mockplugin-snapshotter STEP: deleting the driver namespace: e2e-csi-mock-volumes-4421-6950 STEP: Waiting for namespaces [e2e-csi-mock-volumes-4421-6950] to vanish [AfterEach] [sig-storage] CSI mock volume k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1786]: Snapshot controller metrics not found -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:41.852: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:41.454: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:35.190: INFO: Driver nfs doesn't support ext3 -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:32.259: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:31.888: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:31.574: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:31.190: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:17.506: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:17.077: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Volume FStype [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-fstype Oct 13 09:27:16.463: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:27:16.645999 1007185 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:27:16.646: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume FStype [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:75 Oct 13 09:27:16.650: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Volume FStype [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-fstype-4616" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:15.794: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:15.417: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:15.085: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:14.670: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:07.684: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:07.340: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:27:06.987: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:52.658: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:52.286: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:51.892: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:51.498: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-placement Oct 13 09:26:50.920: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:26:51.111419 1005939 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:26:51.111: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55 Oct 13 09:26:51.114: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-placement-465" for this suite. [AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:50.411: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:50.047: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:49.695: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:49.358: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:26:46.357: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:26:46.561481 1005769 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:26:46.561: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:26:46.565: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-7844" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:45.760: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:45.394: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:45.079: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:44.757: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:26:21.918: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:50.997: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:50.560: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:49.623: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:49.310: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:48.998: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:48.667: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:48.275: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:47.939: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:40.426: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:40.044: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:39.633: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:27.446: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:22.157: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:12.260: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:11.882: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:11.879: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:11.506: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:11.125: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:25:00.010: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:58.440: INFO: Driver cinder doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:58.164: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
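The "Only supported for providers [...]" variants come from a second gate: each in-tree driver definition (the in_tree.go frames above) pins itself to the cloud providers it can talk to, and the framework skips when the cluster under test, openstack in this run, is not in that list. A rough sketch with a hard-coded provider standing in for the e2e config; the helper mirrors the shape of e2eskipper.SkipUnlessProviderIs but is illustrative only:

```go
package main

import "fmt"

// In the real framework the provider comes from the e2e test context
// (the --provider flag); hard-coded here for illustration.
var testProvider = "openstack"

// skipUnlessProviderIs mimics the provider gate: the spec proceeds only
// when the configured provider appears in the driver's supported list.
func skipUnlessProviderIs(supported ...string) (bool, string) {
	for _, p := range supported {
		if p == testProvider {
			return false, ""
		}
	}
	return true, fmt.Sprintf("Only supported for providers %v (not %s)", supported, testProvider)
}

func main() {
	if skip, why := skipUnlessProviderIs("gce", "gke"); skip {
		fmt.Println(why) // Only supported for providers [gce gke] (not openstack)
	}
}
```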
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:57.803: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:57.639: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:57.303: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:41.766: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:41.398: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:41.017: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:40.634: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename provisioning Oct 13 09:24:40.005: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:24:40.179124 1001326 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:24:40.179: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:201 Oct 13 09:24:40.186: INFO: Driver "nfs" does not support populate data from snapshot - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-provisioning-3149" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "nfs" does not support populate data from snapshot - skipping
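This entry is longer than the neighboring skips because the snapshot gate fires inside the It body rather than in the pattern-level BeforeEach: the nfs driver does support Dynamic PV, so the framework first builds a client, provisions the e2e-provisioning-3149 namespace, and waits for its service account, and only then discovers the driver cannot populate data from a snapshot, hence the trailing "Destroying namespace" step. Sketched below as a capability lookup; the capability key is an assumption modeled on the storage framework's CapSnapshotDataSource, and the map is illustrative:

```go
package main

import "fmt"

// Hypothetical capability set; the real check consults the driver's
// declared capabilities (roughly CapSnapshotDataSource in
// k8s.io/kubernetes/test/e2e/storage/framework).
type Capability string

const CapSnapshotDataSource Capability = "snapshotDataSource"

type driver struct {
	name string
	caps map[Capability]bool
}

func main() {
	nfs := driver{name: "nfs", caps: map[Capability]bool{}} // no snapshot support declared
	if !nfs.caps[CapSnapshotDataSource] {
		// Fires only after namespace setup, so teardown still runs.
		fmt.Printf("Driver %q does not support populate data from snapshot - skipping\n", nfs.name)
	}
}
```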
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:39.331: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:38.920: INFO: Driver cinder doesn't support ext3 -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:25.491: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:25.000: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:24.870: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:24.612: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:24.433: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:24.100: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:23.753: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:16.457: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:16.059: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:09.301: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:08.922: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:08.589: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:08.207: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:07.733: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:24:07.338: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:49.862: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:23:49.320: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:23:49.477592 999141 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:23:49.477: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:23:49.481: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-2113" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:48.741: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:48.350: INFO: Driver emptydir doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:48.003: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-placement Oct 13 09:23:46.454: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:23:46.774389 999040 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:23:46.774: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55 Oct 13 09:23:46.783: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-placement-2644" for this suite. [AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:45.761: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:45.337: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:44.989: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:44.650: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:43.372: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:41.142: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:23:40.610: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:23:40.792123 998689 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:23:40.792: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:23:40.796: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-5422" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-vsan-policy Oct 13 09:23:39.836: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:23:40.002602 998674 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:23:40.002: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86 Oct 13 09:23:40.006: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-vsan-policy-3682" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:39.329: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:38.962: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:33.734: INFO: Driver cinder doesn't support ext4 -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:33.391: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:33.039: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:32.553: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
fail [k8s.io/kubernetes@v1.22.1/test/e2e/common/node/pods.go:884]: found a pod(s) Unexpected error: <*errors.errorString | 0xc0002fcad0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-node] Pods k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pods Oct 13 09:23:27.398: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:23:27.580386 998258 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:23:27.580: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods k8s.io/kubernetes@v1.22.1/test/e2e/common/node/pods.go:188 [It] should delete a collection of pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:630 STEP: Create set of pods Oct 13 09:23:27.610: INFO: created test-pod-1 Oct 13 09:23:27.641: INFO: created test-pod-2 Oct 13 09:23:27.674: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted Oct 13 09:23:27.796: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:28.807: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:29.803: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:30.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:31.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:32.812: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:33.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:34.811: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:35.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:36.803: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:37.807: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:38.806: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:39.805: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:40.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:41.807: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:42.809: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:43.816: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:44.811: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:45.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:46.807: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:47.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:48.807: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:49.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:50.809: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:51.801: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:52.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:53.809: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:54.803: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:55.805: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:56.806: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:57.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:58.803: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:23:59.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:00.813: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:01.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:02.814: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:03.813: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:04.807: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:05.806: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:06.827: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:07.807: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:08.816: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:09.813: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:10.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:11.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:12.805: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:13.803: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:14.806: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:15.805: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:16.803: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:17.803: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:18.801: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:19.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:20.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:21.807: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:22.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:23.809: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:24.802: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:25.821: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:26.804: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:27.801: INFO: Pod quantity 3 is different from expected quantity 0 Oct 13 09:24:27.809: INFO: Pod quantity 3 is different from expected quantity 0 [AfterEach] [sig-node] Pods k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "e2e-pods-666". STEP: Found 3 events. 
Oct 13 09:24:27.813: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-pod-1: { } Scheduled: Successfully assigned e2e-pods-666/test-pod-1 to ostest-n5rnf-worker-0-94fxs Oct 13 09:24:27.813: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-pod-2: { } Scheduled: Successfully assigned e2e-pods-666/test-pod-2 to ostest-n5rnf-worker-0-94fxs Oct 13 09:24:27.813: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-pod-3: { } Scheduled: Successfully assigned e2e-pods-666/test-pod-3 to ostest-n5rnf-worker-0-j4pkp Oct 13 09:24:27.817: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 09:24:27.817: INFO: test-pod-1 ostest-n5rnf-worker-0-94fxs Pending 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC }] Oct 13 09:24:27.817: INFO: test-pod-2 ostest-n5rnf-worker-0-94fxs Pending 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC }] Oct 13 09:24:27.817: INFO: test-pod-3 ostest-n5rnf-worker-0-j4pkp Pending 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:23:27 +0000 UTC }] Oct 13 09:24:27.817: INFO: Oct 13 09:24:27.823: INFO: skipping dumping cluster info - cluster too large STEP: Destroying namespace "e2e-pods-666" for this suite. fail [k8s.io/kubernetes@v1.22.1/test/e2e/common/node/pods.go:884]: found a pod(s) Unexpected error: <*errors.errorString | 0xc0002fcad0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
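For reference, this failing conformance test follows a delete-then-poll shape: delete every pod in the namespace with one DeleteCollection call, then poll the pod count until it reaches zero or the wait times out. The repeated "Pod quantity 3 is different from expected quantity 0" lines are that poll, and the "timed out waiting for the condition" error is the wait expiring while the three pods were still Pending with unready containers. A standalone client-go sketch of the same shape; the kubeconfig loading and the one-minute timeout are assumptions for illustration, since the e2e framework wires its client and timeouts differently:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig loading is an assumption for this sketch; the e2e
	// framework builds its client from the test context instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	ns := "e2e-pods-666" // namespace name taken from the log above

	// Step 1: delete all pods in the namespace with a single call,
	// which is what "should delete a collection of pods" exercises.
	if err := cs.CoreV1().Pods(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{}, metav1.ListOptions{}); err != nil {
		panic(err)
	}

	// Step 2: poll once per second until the pod count reaches zero.
	// When the timeout expires, wait returns the "timed out waiting
	// for the condition" error seen in the failure output.
	err = wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		if n := len(pods.Items); n != 0 {
			fmt.Printf("Pod quantity %d is different from expected quantity 0\n", n)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all pods deleted")
}
```

The events dump above shows all three pods scheduled but never reaching Ready before deletion, so the count stayed at 3 for the whole wait and the assertion at pods.go:884 reported the failure.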
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] vcp at scale [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename vcp-at-scale Oct 13 09:23:26.603: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:23:26.839870 998244 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:23:26.839: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] vcp at scale [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_scale.go:75 Oct 13 09:23:26.845: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] vcp at scale [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-vcp-at-scale-635" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_scale.go:76]: Only supported for providers [vsphere] (not openstack)
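Note: skips like this one come from a provider gate in the suite's BeforeEach. A minimal sketch of that gate, assuming the v1.22 `e2eskipper` helper (the Describe wrapper is illustrative, not the actual vcp-at-scale source):

```go
package e2e

import (
	"github.com/onsi/ginkgo"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("[sig-storage] vcp at scale [Feature:vsphere]", func() {
	ginkgo.BeforeEach(func() {
		// Logs "Only supported for providers [vsphere] (not <provider>)"
		// and skips the spec unless the suite runs with --provider=vsphere.
		e2eskipper.SkipUnlessProviderIs("vsphere")
	})
})
```

Because this cluster runs on openstack, every [Feature:vsphere] spec in this report is skipped the same way.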
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:23:25.923: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:55.457: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:55.027: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:54.191: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:53.800: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:48.549: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:48.151: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:47.733: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:47.311: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:46.846: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:45.296: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:37.595: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:37.274: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:36.939: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:33.269: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:32.917: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:32.542: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:21.489: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:21.183: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:20.844: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:20.480: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:20.484: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:11.437: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:11.122: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:10.758: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:10.445: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:10.098: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:22:09.591: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:22:09.782569 994874 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:22:09.782: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:22:09.786: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-8117" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:07.340: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:07.014: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:06.710: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:06.357: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:06.056: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:05.755: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:05.375: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:05.045: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:22:04.680: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:47.307: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:41.746: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:41.378: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:41.024: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:36.242: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:35.889: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:28.110: INFO: Driver "nfs" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
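Note: driver-capability skips such as this FsGroup one are decided by a lookup in the driver's declared capability map rather than by the cloud provider. A hedged sketch, assuming the v1.22 storage framework types (`driverInfo` is a hypothetical argument here):

```go
package e2e

import (
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
	storageframework "k8s.io/kubernetes/test/e2e/storage/framework"
)

// skipUnlessFsGroupSupported mirrors the check behind the skip above:
// each in-tree driver declares a Capabilities map, and suites consult
// it before running capability-specific test patterns.
func skipUnlessFsGroupSupported(driverInfo *storageframework.DriverInfo) {
	if !driverInfo.Capabilities[storageframework.CapFsGroup] {
		e2eskipper.Skipf("Driver %q does not support FsGroup - skipping", driverInfo.Name)
	}
}
```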
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:27.741: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:27.406: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:13.263: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:12.933: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:12.611: INFO: Driver "nfs" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:21:12.199: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:56.789: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:56.440: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:56.083: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:55.605: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-apps] ReplicationController k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename replication-controller Oct 13 09:20:54.874: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:20:55.110538 991531 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:20:55.110: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController k8s.io/kubernetes@v1.22.1/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a private image [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/apps/rc.go:68 Oct 13 09:20:55.127: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [sig-apps] ReplicationController k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-replication-controller-107" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/rc.go:70]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:54.372: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:46.807: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:44.541: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:43.917: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:43.597: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:27.757: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:27.342: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:20:26.953: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-vsan-policy Oct 13 09:20:22.984: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:20:23.142097 989916 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:20:23.142: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86 Oct 13 09:20:23.150: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-vsan-policy-5681" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:44.099: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:43.734: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:43.376: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volumemode Oct 13 09:19:42.827: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:19:43.005552 988414 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:19:43.005: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352 Oct 13 09:19:43.010: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volumemode-650" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:42.262: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:41.916: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:40.954: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:40.609: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:40.246: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:36.875: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:19:27.954: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:52.899: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:52.471: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:46.020: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:45.600: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:37.511: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:37.147: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:36.792: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:36.428: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:36.042: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:35.714: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:35.272: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-provision
Oct 13 09:18:34.585: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:18:34.793410 985624 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:18:34.793: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:52
Oct 13 09:18:34.799: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-provision-3709" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_cluster_ds.go:53]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:33.952: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:33.570: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:29.375: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:29.015: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:27.983: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:27.548: INFO: Driver "csi-hostpath" does not support topology - skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:25.851: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume Disk Size [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-disksize
Oct 13 09:18:25.253: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:18:25.385001 985071 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:18:25.385: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Disk Size [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_disksize.go:55
Oct 13 09:18:25.394: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Disk Size [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-disksize-2206" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_disksize.go:56]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:24.728: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:24.375: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:24.027: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:23.621: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:18:20.966: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:18:21.110932 984981 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:18:21.111: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:201
Oct 13 09:18:21.116: INFO: Driver "cinder" does not support populate data from snapshot - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-4681" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:203]: Driver "cinder" does not support populate data from snapshot - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:20.389: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:20.038: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:17.474: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:18:17.130: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:43.276: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:43.115: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:42.805: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:42.648: INFO: Driver "nfs" does not support topology - skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "nfs" does not support topology - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume
Oct 13 09:17:42.140: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:17:42.355699 983542 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:17:42.355: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:196
Oct 13 09:17:42.360: INFO: Driver "hostPath" does not support exec - skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-7832" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "hostPath" does not support exec - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:41.440: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:40.582: INFO: Driver nfs doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:40.211: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:39.883: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:39.516: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:39.134: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:38.740: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:38.304: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:37.848: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:36.274: INFO: Driver csi-hostpath doesn't support ext4 -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:35.914: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:35.553: INFO: Driver emptydir doesn't support ext3 -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver emptydir doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:25.582: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:25.145: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:24.781: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:20.452: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:20.080: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:19.609: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:19.213: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:15.005: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:13.782: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:13.452: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
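The entry above is the inverse of the usual gate: instead of the driver lacking a capability, it has one that makes the pattern redundant. If a driver can provision dynamically, the suites drop its PreprovisionedPV patterns because the dynamic patterns already cover the same behavior (the real check lives around test/e2e/storage/testsuites/base.go). The sketch below models that rule with an interface assertion; `DynamicPVTestDriver` echoes an upstream interface name, but the `ProvisionDynamically` marker method and `exampleDriver` are hypothetical, introduced only for this illustration.

```go
// Sketch of the redundancy rule behind "Driver supports dynamic
// provisioning, skipping PreprovisionedPV pattern".
package main

import "fmt"

// TestDriver is the minimal driver handle used by the sketch.
type TestDriver interface{ Name() string }

// DynamicPVTestDriver marks drivers that can provision volumes on demand.
type DynamicPVTestDriver interface {
	TestDriver
	ProvisionDynamically() // hypothetical marker method for this sketch
}

// exampleDriver is a hypothetical driver that supports dynamic provisioning.
type exampleDriver struct{}

func (exampleDriver) Name() string          { return "example-csi" }
func (exampleDriver) ProvisionDynamically() {}

// skipIfRedundant drops PreprovisionedPV patterns for dynamic provisioners.
func skipIfRedundant(d TestDriver, volType string) bool {
	if _, dynamic := d.(DynamicPVTestDriver); dynamic && volType == "PreprovisionedPV" {
		fmt.Println("Driver supports dynamic provisioning, skipping PreprovisionedPV pattern")
		return true
	}
	return false
}

func main() {
	skipIfRedundant(exampleDriver{}, "PreprovisionedPV")
}
```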
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:13.118: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:05.397: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:04.913: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:04.543: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:03.153: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:02.734: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:02.332: INFO: Driver cinder doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Flexvolumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename flexvolume
Oct 13 09:17:01.572: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:17:01.772542 981573 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:17:01.772: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Flexvolumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:169
Oct 13 09:17:01.781: INFO: Only supported for providers [gce local] (not openstack)
[AfterEach] [sig-storage] Flexvolumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-flexvolume-6980" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/flexvolume.go:170]: Only supported for providers [gce local] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:00.958: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:00.524: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:17:00.094: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename provisioning Oct 13 09:16:59.501: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:16:59.652550 981522 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:16:59.652: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:180 Oct 13 09:16:59.658: INFO: Driver "cinder" does not define supported mount option - skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-provisioning-7040" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "cinder" does not define supported mount option - skipping
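The mount-option provisioning test above guards itself the same way: when a driver declares no supported mount options at all, there is nothing to put in the StorageClass and nothing to verify, so it skips at provisioning.go:182. A sketch of that guard, again with illustrative names rather than the framework's own types:

```go
package storage_test

import "testing"

// mountOptionDriver is an illustrative stand-in for a driver's declared
// mount options; the real framework keeps these in per-driver metadata.
type mountOptionDriver struct {
	name         string
	mountOptions []string
}

// TestProvisionWithMountOptions mirrors the guard seen above: with no
// declared options, the mount-option test has nothing to assert.
func TestProvisionWithMountOptions(t *testing.T) {
	d := mountOptionDriver{name: "cinder"} // no options declared
	if len(d.mountOptions) == 0 {
		t.Skipf("Driver %q does not define supported mount option - skipping", d.name)
	}
	// ...create a StorageClass carrying d.mountOptions and check the mount flags...
}
```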
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:59.000: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volumemode Oct 13 09:16:58.369: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:16:58.562736 981496 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:16:58.562: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352 Oct 13 09:16:58.578: INFO: Driver "nfs" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volumemode-7951" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping
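"Driver "nfs" does not provide raw block - skipping" is the third gate visible in this run, a boolean capability check: NFS can only hand out filesystems, so every block-volmode pattern is skipped before a pod is ever scheduled. An illustrative sketch of that check, assuming a simple capability map rather than the framework's actual structures:

```go
package storage_test

import "testing"

// capability is an illustrative capability key; the e2e framework keeps a
// comparable map of boolean capability flags for each driver.
type capability string

const capBlock capability = "block"

type blockDriver struct {
	name string
	caps map[capability]bool
}

func TestBlockVolumeMode(t *testing.T) {
	nfs := blockDriver{name: "nfs", caps: map[capability]bool{capBlock: false}}
	if !nfs.caps[capBlock] {
		t.Skipf("Driver %q does not provide raw block - skipping", nfs.name)
	}
	// ...create a block-mode PV/PVC pair and verify the device path in the pod...
}
```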
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:57.829: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:57.413: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:56.940: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:56.476: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:52.125: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:51.756: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:51.435: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:51.073: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:50.766: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:50.439: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:50.092: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:49.735: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:49.315: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:48.887: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:48.427: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:48.088: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:47.710: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volumemode Oct 13 09:16:46.222: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:16:46.489589 980559 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:16:46.489: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352 Oct 13 09:16:46.494: INFO: Driver "nfs" does not provide raw block - skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volumemode-1623" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:45.742: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:45.460: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:45.169: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:45.114: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:44.856: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:44.374: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:37.212: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:36.820: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:26.446: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:26.077: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-vsan-policy Oct 13 09:16:25.402: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:16:25.572011 980034 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:16:25.572: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86 Oct 13 09:16:25.576: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-vsan-policy-9267" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:24.790: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:24.473: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:21.873: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:21.526: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:20.033: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:19.712: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:19.401: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:19.074: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:14.633: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:16:14.121: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:16:14.294983 979540 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:16:14.295: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:16:14.302: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-7263" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename provisioning Oct 13 09:16:13.464: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:16:13.625412 979528 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:16:13.625: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:180 Oct 13 09:16:13.629: INFO: Driver "csi-hostpath" does not define supported mount option - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-provisioning-6342" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "csi-hostpath" does not define supported mount option - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:12.987: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:12.643: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:09.960: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:16:09.659: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:55.853: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:33.822: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:33.412: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:33.084: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:32.245: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:31.821: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:31.371: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:30.994: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:19.361: INFO: Driver "nfs" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:18.978: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:18.650: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:16.106: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:15.768: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:15.346: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:14.877: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:14.423: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:14.039: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:11.352: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:10.993: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:09.725: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:09.286: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:08.886: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:15:08.497: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:57.925: INFO: Driver "nfs" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "nfs" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:57.457: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:57.062: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:52.523: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:40.653: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
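A second family of skips is gated on the cloud provider rather than on driver capabilities: in-tree drivers such as gcepd, awsebs, vsphere, and azure-disk bail out when the cluster under test runs elsewhere (here OpenStack), which is what the in_tree.go line references in these records point at. The message format matches the e2e framework's provider guard; a sketch of how such a guard is typically written (assuming the skipper package path used in k8s 1.22):

```go
package storage

import (
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// skipUnlessGCE shows the provider gate run before any test setup. On this
// OpenStack cluster it produces, e.g.:
//   "Only supported for providers [gce gke] (not openstack)"
func skipUnlessGCE() {
	e2eskipper.SkipUnlessProviderIs("gce", "gke")
}
```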
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:37.570: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:36.114: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:35.790: INFO: Driver hostPath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:23.067: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:09.241: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:08.925: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:08.636: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:07.275: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:06.946: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:06.689: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:14:06.398: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:22.073: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
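This record shows a third skip reason: when a driver supports dynamic provisioning, the redundant Inline-volume pattern is dropped (testsuites/base.go:244), since the DynamicPV pattern already exercises the same code paths. Continuing the guard sketch above (same hypothetical types), the check is roughly:

```go
// Sketch only; reuses the hypothetical driverCaps/testPattern types from the
// earlier guard sketch.
func skipRedundantInlineVolume(d driverCaps, p testPattern) {
	if d.volTypes["DynamicPV"] && p.volType == "InlineVolume" {
		ginkgo.Skip("Driver supports dynamic provisioning, skipping InlineVolume pattern")
	}
}
```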
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:19.558: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:18.766: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:18.365: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Volume limits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-limits-on-node Oct 13 09:13:17.737: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:13:17.933304 972993 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:13:17.933: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits k8s.io/kubernetes@v1.22.1/test/e2e/storage/volume_limits.go:35 Oct 13 09:13:17.943: INFO: Only supported for providers [aws gce gke] (not openstack) [AfterEach] [sig-storage] Volume limits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-limits-on-node-4350" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/volume_limits.go:36]: Only supported for providers [aws gce gke] (not openstack)
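Suite-level provider gates, as in this Volume limits entry, fire only after the framework's own BeforeEach has run, which is why these records include the client and namespace STEPs, the PodSecurityPolicy deprecation warning, and an AfterEach that destroys the just-created namespace. A sketch of that ordering, assuming the framework and skipper helper names below:

```go
package storage

import (
	"github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("[sig-storage] Volume limits", func() {
	// NewDefaultFramework registers the namespace-creating BeforeEach and the
	// namespace-destroying AfterEach visible in the record above.
	f := framework.NewDefaultFramework("volume-limits-on-node")
	_ = f

	// This guard runs after the framework's BeforeEach, so a namespace has
	// already been provisioned by the time the spec is skipped
	// (volume_limits.go:36 in the record above).
	ginkgo.BeforeEach(func() {
		e2eskipper.SkipUnlessProviderIs("aws", "gce", "gke")
	})
})
```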
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:17.141: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-vsan-policy Oct 13 09:13:16.614: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:13:16.800841 972969 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:13:16.800: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86 Oct 13 09:13:16.805: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-vsan-policy-322" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:04.995: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:04.845: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:04.480: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:13:04.164: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:48.499: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:48.149: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:47.810: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:44.607: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:23.548: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:23.149: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:22.821: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:22.438: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:14.739: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
fail [k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:557]: Oct 13 09:13:55.642: Failed to delete stateful pod ss2-1 for StatefulSet e2e-statefulset-7447/ss2: pods "ss2-1" not found
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-apps] StatefulSet k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename statefulset
Oct 13 09:12:14.731: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:12:15.364700 970187 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:12:15.364: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:92
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:107
STEP: Creating service test in namespace e2e-statefulset-7447
[It] should implement legacy replacement when the update strategy is OnDelete [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:503
STEP: Creating a new StatefulSet
Oct 13 09:12:15.456: INFO: Found 0 stateful pods, waiting for 3
Oct 13 09:12:25.473: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:12:35.465: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:12:45.469: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:12:55.491: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:13:05.469: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:13:15.461: INFO: Found 1 stateful pods, waiting for 3
Oct 13 09:13:25.463: INFO: Found 2 stateful pods, waiting for 3
Oct 13 09:13:35.462: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:35.462: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:35.462: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 13 09:13:45.461: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:45.461: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:45.461: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Restoring Pods to the current revision
Oct 13 09:13:45.551: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:45.551: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 13 09:13:45.551: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj to quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm
Oct 13 09:13:45.599: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Recreating Pods at the new revision
Oct 13 09:13:55.642: FAIL: Failed to delete stateful pod ss2-1 for StatefulSet e2e-statefulset-7447/ss2: pods "ss2-1" not found

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func9.2.9()
  k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:557 +0xdee
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000001a0)
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113 +0xba
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0018dce68)
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:64 +0x125
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x7fc73e8b4fff)
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/it_node.go:26 +0x7b
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00293b1d0, 0xc0018dd230, {0x83433a0, 0xc000388900})
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:215 +0x2a9
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00293b1d0, {0x83433a0, 0xc000388900})
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:138 +0xe7
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000e46dc0, 0xc00293b1d0)
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:200 +0xe5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000e46dc0)
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:170 +0x1a5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000e46dc0)
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:66 +0xc5
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000374780, {0x8343660, 0xc0021c4e10}, {0x0, 0x0}, {0xc000a82070, 0x1, 0x1}, {0x843fe58, 0xc000388900}, ...)
  github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/suite/suite.go:62 +0x4b2
github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc001ec1590, {0xc000b372d0, 0xb8fc7b0, 0x457d780})
  github.com/openshift/origin/pkg/test/ginkgo/cmd_runtest.go:61 +0x3be
main.newRunTestCommand.func1.1()
  github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x32
github.com/openshift/origin/test/extended/util.WithCleanup(0xc00193fc18)
  github.com/openshift/origin/test/extended/util/test.go:168 +0xad
main.newRunTestCommand.func1(0xc001eea780, {0xc000b372d0, 0x1, 0x1})
  github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x38a
github.com/spf13/cobra.(*Command).execute(0xc001eea780, {0xc000b372a0, 0x1, 0x1})
  github.com/spf13/cobra@v1.1.3/command.go:852 +0x60e
github.com/spf13/cobra.(*Command).ExecuteC(0xc001831b80)
  github.com/spf13/cobra@v1.1.3/command.go:960 +0x3ad
github.com/spf13/cobra.(*Command).Execute(...)
  github.com/spf13/cobra@v1.1.3/command.go:897
main.main.func1(0xc000531f00)
  github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:84 +0x8a
main.main()
  github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:85 +0x3b6
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:118
Oct 13 09:13:55.654: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-statefulset-7447 describe po ss2-0'
Oct 13 09:13:55.826: INFO: stderr: ""
Oct 13 09:13:55.826: INFO: stdout: "Name: ss2-0\nNamespace: e2e-statefulset-7447\nPriority: 0\nNode: ostest-n5rnf-worker-0-8kq82/10.196.2.72\nStart Time: Thu, 13 Oct 2022 09:13:48 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-77bddb779c\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"kuryr\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.128.179.60\"\n ],\n \"mac\": \"fa:16:3e:30:2b:46\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"kuryr\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.128.179.60\"\n ],\n \"mac\": \"fa:16:3e:30:2b:46\",\n \"default\": true,\n \"dns\": {}\n }]\n openshift.io/scc: anyuid\nStatus: Terminating (lasts 0s)\nTermination Grace Period: 0s\nIP: \nIPs: <none>\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: \n Image: quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm\n Image ID: \n Port: <none>\n Host Port: <none>\n State: Waiting\n Reason: ContainerCreating\n Ready: False\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwr4k (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-rwr4k:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: <nil>\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned e2e-statefulset-7447/ss2-0 to ostest-n5rnf-worker-0-8kq82\n Normal AddedInterface 3s multus Add eth0 [10.128.179.60/23] from kuryr\n Normal Pulling 3s kubelet Pulling image \"quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm\"\n"
Oct 13 09:13:55.826: INFO: Output of kubectl describe ss2-0:
Name:                      ss2-0
Namespace:                 e2e-statefulset-7447
Priority:                  0
Node:                      ostest-n5rnf-worker-0-8kq82/10.196.2.72
Start Time:                Thu, 13 Oct 2022 09:13:48 +0000
Labels:                    baz=blah
                           controller-revision-hash=ss2-77bddb779c
                           foo=bar
                           statefulset.kubernetes.io/pod-name=ss2-0
Annotations:               k8s.v1.cni.cncf.io/network-status:
                             [{
                                 "name": "kuryr",
                                 "interface": "eth0",
                                 "ips": [
                                     "10.128.179.60"
                                 ],
                                 "mac": "fa:16:3e:30:2b:46",
                                 "default": true,
                                 "dns": {}
                             }]
                           k8s.v1.cni.cncf.io/networks-status:
                             [{
                                 "name": "kuryr",
                                 "interface": "eth0",
                                 "ips": [
                                     "10.128.179.60"
                                 ],
                                 "mac": "fa:16:3e:30:2b:46",
                                 "default": true,
                                 "dns": {}
                             }]
                           openshift.io/scc: anyuid
Status:                    Terminating (lasts 0s)
Termination Grace Period:  0s
IP:
IPs:                       <none>
Controlled By:             StatefulSet/ss2
Containers:
  webserver:
    Container ID:
    Image:          quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwr4k (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-rwr4k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       7s    default-scheduler  Successfully assigned e2e-statefulset-7447/ss2-0 to ostest-n5rnf-worker-0-8kq82
  Normal  AddedInterface  3s    multus             Add eth0 [10.128.179.60/23] from kuryr
  Normal  Pulling         3s    kubelet            Pulling image "quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm"
Oct 13 09:13:55.827: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-statefulset-7447 logs ss2-0 --tail=100'
Oct 13 09:13:55.994: INFO: rc: 1
Oct 13 09:13:55.994: INFO: Last 100 log lines of ss2-0:
Oct 13 09:13:55.994: INFO: Deleting all statefulset in ns e2e-statefulset-7447
Oct 13 09:13:55.998: INFO: Scaling statefulset ss2 to 0
Oct 13 09:14:06.023: INFO: Waiting for statefulset status.replicas updated to 0
Oct 13 09:14:06.027: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-statefulset-7447".
STEP: Found 34 events.
Oct 13 09:14:06.062: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: { } Scheduled: Successfully assigned e2e-statefulset-7447/ss2-0 to ostest-n5rnf-worker-0-j4pkp
Oct 13 09:14:06.062: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-0: { } Scheduled: Successfully assigned e2e-statefulset-7447/ss2-0 to ostest-n5rnf-worker-0-8kq82
Oct 13 09:14:06.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-1: { } Scheduled: Successfully assigned e2e-statefulset-7447/ss2-1 to ostest-n5rnf-worker-0-8kq82
Oct 13 09:14:06.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss2-2: { } Scheduled: Successfully assigned e2e-statefulset-7447/ss2-2 to ostest-n5rnf-worker-0-8kq82
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:12:15 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:04 +0000 UTC - event for ss2-0: {multus } AddedInterface: Add eth0 [10.128.179.227/23] from kuryr
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:04 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj"
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:15 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:15 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:15 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:15 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj" in 10.243152958s
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:32 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:32 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:32 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj" already present on machine
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:32 +0000 UTC - event for ss2-1: {multus } AddedInterface: Add eth0 [10.128.179.60/23] from kuryr
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:33 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:37 +0000 UTC - event for ss2-2: {multus } AddedInterface: Add eth0 [10.128.178.210/23] from kuryr
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:37 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:37 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:37 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-15-k8s-gcr-io-e2e-test-images-httpd-2-4-38-1-IML2TQPIHpWx2svj" already present on machine
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:38 +0000 UTC - event for test: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint e2e-statefulset-7447/test: Operation cannot be fulfilled on endpoints "test": the object has been modified; please apply your changes to the latest version and try again
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Unhealthy: Readiness probe failed: Get "http://10.128.179.227:80/index.html": read tcp 10.196.0.199:45820->10.128.179.227:80: read: connection reset by peer
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "40fc3350-2465-40fb-ad19-e183eab52541" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_ss2-0_e2e-statefulset-7447_40fc3350-2465-40fb-ad19-e183eab52541_0(6d7eabfa390111c51f2d272b1725729ccf8e68ce430628bd0452724355514061): error removing pod e2e-statefulset-7447_ss2-0 from CNI network \"multus-cni-network\": delegateDel: error invoking ConflistDel - \"kuryr\": conflistDel: error in getting result from DelNetworkList: Looks like http://localhost:5036/delNetwork cannot be reached. Is kuryr-daemon running?; Post \"http://localhost:5036/delNetwork\": dial tcp [::1]:5036: connect: connection refused"
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-j4pkp} Killing: Stopping container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Unhealthy: Readiness probe failed: Get "http://10.128.179.60:80/index.html": dial tcp 10.128.179.60:80: connect: connection refused
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-1: {kubelet ostest-n5rnf-worker-0-8kq82} Killing: Stopping container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:45 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Killing: Stopping container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:47 +0000 UTC - event for ss2-2: {kubelet ostest-n5rnf-worker-0-8kq82} Unhealthy: Readiness probe failed: Get "http://10.128.178.210:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:52 +0000 UTC - event for ss2-0: {multus } AddedInterface: Add eth0 [10.128.179.60/23] from kuryr
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:13:52 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm"
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:14:02 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:14:02 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-16-k8s-gcr-io-e2e-test-images-httpd-2-4-39-1-n3rCdS4qndowrZLm" in 10.523033128s
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:14:03 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container webserver
Oct 13 09:14:06.063: INFO: At 2022-10-13 09:14:03 +0000 UTC - event for ss2-0: {kubelet ostest-n5rnf-worker-0-8kq82} Killing: Stopping container webserver
Oct 13 09:14:06.066: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 13 09:14:06.066: INFO:
Oct 13 09:14:06.073: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-statefulset-7447" for this suite.
fail [k8s.io/kubernetes@v1.22.1/test/e2e/apps/statefulset.go:557]: Oct 13 09:13:55.642: Failed to delete stateful pod ss2-1 for StatefulSet e2e-statefulset-7447/ss2: pods "ss2-1" not found
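This StatefulSet failure is the only non-skip in this stretch of the run. The test updates the pod template of a StatefulSet whose update strategy is OnDelete and then deletes each pod by hand so the controller recreates it at the new revision; the FAIL at statefulset.go:557 records that deleting ss2-1 returned NotFound, i.e. the pod was already gone when the test issued the delete. The events above (kubelet "Killing" all three pods at 09:13:45, alongside a FailedKillPod caused by the kuryr-daemon being unreachable during sandbox teardown) suggest the pods were being torn down concurrently with the test's own deletes, so this reads as a race with that CNI disruption rather than a StatefulSet controller defect. For reference, a minimal sketch (hypothetical helper name, real apps/v1 API types) of the kind of object the test drives:

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// onDeleteStatefulSet builds a StatefulSet whose updateStrategy is OnDelete:
// a template change never rolls out on its own; the controller only creates
// replacement pods at the new revision once the old ones are deleted.
func onDeleteStatefulSet() *appsv1.StatefulSet {
	replicas := int32(3)
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.OnDeleteStatefulSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "example.test/httpd:2.4.39", // placeholder image
					}},
				},
			},
		},
	}
}
```

Deleting a pod of such a StatefulSet (kubectl delete pod ss2-1) is exactly the step the test was performing when it raced with the ongoing teardown and hit pods "ss2-1" not found.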
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:11.350: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-vsan-policy Oct 13 09:12:10.205: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:12:10.979887 970120 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:12:10.979: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86 Oct 13 09:12:10.983: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-vsan-policy-6893" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:09.882: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:09.561: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:08.847: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:08.528: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:07.419: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:12:07.028: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:11:17.367: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:11:16.986: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:11:16.559: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:11:16.143: INFO: Driver nfs doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:11:15.536: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:11:15.719415 967995 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:11:15.719: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:11:15.724: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-2791" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:11:14.863: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:11:14.387: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:11:06.515: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:56.590: INFO: Driver nfs doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:56.234: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:51.786: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:44.008: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:43.630: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:43.210: INFO: Driver "nfs" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:42.863: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:40.999: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:40.678: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:40.333: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:39.999: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:39.662: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:39.352: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:10:39.047: INFO: Driver nfs doesn't support ext4 -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:53.143: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:09:52.631: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:09:52.794831 964905 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:09:52.794: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:09:52.803: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-7407" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
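Note: the zone-support entries above also show the suite probing for PodSecurityPolicy before making the namespace privileged; the policy/v1beta1 API it queries is deprecated and removed in v1.25, which is exactly what the W1013 warning records. A standalone sketch of an equivalent probe with client-go follows; the kubeconfig path is an assumption, and this only compiles against client-go releases older than v0.25, where the PolicyV1beta1 PodSecurityPolicy client still exists.

```go
// Standalone sketch of the probe behind "No PodSecurityPolicies found;
// assuming PodSecurityPolicy is disabled." Requires client-go < v0.25.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// ".kube/config" matches the path the test runner used; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", ".kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	psps, err := cs.PolicyV1beta1().PodSecurityPolicies().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	if len(psps.Items) == 0 {
		fmt.Println("No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.")
		return
	}
	fmt.Printf("Found %d PodSecurityPolicies\n", len(psps.Items))
}
```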
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:47.903: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:40.773: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:40.397: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:31.352: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:30.825: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:21.117: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:15.537: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:09:00.054: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:53.889: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:26.627: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:22.693: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:22.379: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:22.071: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:21.734: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:21.406: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:21.051: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:20.693: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:19.420: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:19.067: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:18.727: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:18.337: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:17.957: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename provisioning Oct 13 09:08:16.648: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:08:16.787277 961254 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:08:16.787: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:180 Oct 13 09:08:16.790: INFO: Driver "cinder" does not define supported mount option - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-provisioning-2188" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:182]: Driver "cinder" does not define supported mount option - skipping
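Note: the cinder entry above is skipped by a third, per-case gate: the "should provision storage with mount options" case (provisioning.go:180) only runs when the driver declares at least one supported mount option, and cinder declares none here. A minimal sketch under assumed field names:

```go
// Illustrative sketch (assumed field names) of the per-case gate at
// provisioning.go:180: drivers declaring no supported mount options
// skip the mount-option provisioning case.
package main

import "fmt"

type DriverInfo struct {
	Name                 string
	SupportedMountOption []string // empty => the case below is skipped
}

func provisionWithMountOptions(d DriverInfo) {
	if len(d.SupportedMountOption) == 0 {
		fmt.Printf("Driver %q does not define supported mount option - skipping\n", d.Name)
		return
	}
	fmt.Printf("Driver %q: provisioning a PVC with options %v\n", d.Name, d.SupportedMountOption)
}

func main() {
	provisionWithMountOptions(DriverInfo{Name: "cinder"}) // skipped, as logged above
}
```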
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:16.082: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:15.762: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:15.412: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:15.084: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:08:04.434: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:2029]: Unexpected error: <*errors.errorString | 0xc0002fcad0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Oct 13 09:07:47.194: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:07:47.347584 960519 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:07:47.347: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:749
[It] should be rejected when no endpoints exist [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:1989
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node ostest-n5rnf-worker-0-8kq82
Oct 13 09:07:47.379: INFO: Creating new exec pod
Oct 13 09:08:13.436: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node ostest-n5rnf-worker-0-8kq82
Oct 13 09:08:13.436: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:16.745: INFO: rc: 1
Oct 13 09:08:16.745: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:18.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:22.188: INFO: rc: 1
Oct 13 09:08:22.188: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:22.747: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:26.077: INFO: rc: 1
Oct 13 09:08:26.077: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:26.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:30.077: INFO: rc: 1
Oct 13 09:08:30.077: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:30.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:34.061: INFO: rc: 1
Oct 13 09:08:34.061: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:34.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:38.043: INFO: rc: 1
Oct 13 09:08:38.043: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:38.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:42.059: INFO: rc: 1
Oct 13 09:08:42.059: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:42.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:46.041: INFO: rc: 1
Oct 13 09:08:46.041: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:46.746: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:50.115: INFO: rc: 1
Oct 13 09:08:50.115: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
Oct 13 09:08:50.115: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 13 09:08:53.422: INFO: rc: 1
Oct 13 09:08:53.422: INFO: error didn't contain 'REFUSED', keep trying: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-4369 exec execpod-noendpointsrg62k -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 TIMEOUT command terminated with exit code 1 error: exit status 1
[AfterEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-services-4369".
STEP: Found 5 events.
Oct 13 09:08:53.429: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-noendpointsrg62k: { } Scheduled: Successfully assigned e2e-services-4369/execpod-noendpointsrg62k to ostest-n5rnf-worker-0-8kq82
Oct 13 09:08:53.429: INFO: At 2022-10-13 09:08:10 +0000 UTC - event for execpod-noendpointsrg62k: {multus } AddedInterface: Add eth0 [10.128.167.117/23] from kuryr
Oct 13 09:08:53.429: INFO: At 2022-10-13 09:08:10 +0000 UTC - event for execpod-noendpointsrg62k: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 09:08:53.429: INFO: At 2022-10-13 09:08:10 +0000 UTC - event for execpod-noendpointsrg62k: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container agnhost-container
Oct 13 09:08:53.429: INFO: At 2022-10-13 09:08:10 +0000 UTC - event for execpod-noendpointsrg62k: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container agnhost-container
Oct 13 09:08:53.433: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 13 09:08:53.434: INFO: execpod-noendpointsrg62k ostest-n5rnf-worker-0-8kq82 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:07:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:08:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:08:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 09:07:47 +0000 UTC }]
Oct 13 09:08:53.434: INFO:
Oct 13 09:08:53.440: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-services-4369" for this suite.
[AfterEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:753
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:2029]: Unexpected error: <*errors.errorString | 0xc0002fcad0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
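Note: this is the one hard failure in this stretch. The test expects a service with no endpoints to refuse connections, but every `/agnhost connect --timeout=3s no-pods:80` attempt returns TIMEOUT instead of REFUSED, so the retry loop exhausts its budget and surfaces the generic apimachinery wait error, which is exactly the "timed out waiting for the condition" string in the fail line. A compressed sketch of that loop follows; the kubectl arguments and pod name are taken from the log above, while the 2s interval and 30s deadline are assumptions, not the test's exact values.

```go
// Compressed sketch of the failing check: poll the no-endpoints service via
// agnhost until the output contains REFUSED; if it never does,
// wait.PollImmediate returns wait.ErrWaitTimeout, whose message is
// "timed out waiting for the condition".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	args := []string{
		"--namespace=e2e-services-4369", "exec", "execpod-noendpointsrg62k",
		"--", "/agnhost", "connect", "--timeout=3s", "no-pods:80",
	}
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		out, _ := exec.Command("kubectl", args...).CombinedOutput()
		if strings.Contains(string(out), "REFUSED") {
			return true, nil // a service with no endpoints should refuse
		}
		return false, nil // TIMEOUT (as logged above) keeps the poll going
	})
	if err != nil {
		fmt.Println(err) // prints: timed out waiting for the condition
	}
}
```

The TIMEOUT-instead-of-REFUSED pattern in the log is consistent with the node's service proxy (kuryr in this run) silently dropping traffic to the endpoint-less service rather than rejecting it, though the log itself does not identify the cause.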
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:46.608: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:39.309: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:08.279: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:07.866: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:07.465: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:07.137: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:06.778: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:06.456: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:06.100: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:07:00.262: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:59.928: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:59.574: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:59.240: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:58.909: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:58.609: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:53.799: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:53.458: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:52.152: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:51.882: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:51.805: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:51.505: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volumemode
Oct 13 09:06:50.776: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:06:50.946217 957963 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:06:50.946: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352
Oct 13 09:06:50.959: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volumemode-6609" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:50.332: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:50.032: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:06:49.721: INFO: Driver "nfs" does not provide raw block - skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "nfs" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:54.780: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:54.350: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:51.343: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:51.015: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:18.614: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-fstype
Oct 13 09:05:17.924: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:05:18.204920 954842 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:05:18.205: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:75
Oct 13 09:05:18.226: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume FStype [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-fstype-6329" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_fstype.go:76]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:17.178: INFO: Driver csi-hostpath doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver csi-hostpath doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:16.863: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:16.548: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:16.221: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:15.819: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:15.339: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:14.956: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:05:05.255: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename provisioning
Oct 13 09:04:46.366: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:04:46.501624 953916 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:04:46.501: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provision storage with pvc data source [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:239
Oct 13 09:04:46.505: INFO: Driver "cinder" does not support cloning - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-provisioning-5731" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/provisioning.go:241]: Driver "cinder" does not support cloning - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:39.479: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:39.162: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Pod Disks
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-disks
Oct 13 09:04:38.625: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:04:38.776257 953096 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:04:38.776: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/pd.go:74
[It] should be able to delete a non-existent PD without error [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/pd.go:449
Oct 13 09:04:38.816: INFO: Only supported for providers [gce] (not openstack)
[AfterEach] [sig-storage] Pod Disks
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-pod-disks-638" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/pd.go:450]: Only supported for providers [gce] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:38.081: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:07.508: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:07.200: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:06.840: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:06.460: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:06.090: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:04:05.703: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:49.404: INFO: Driver cinder doesn't support ext3 -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:35.159: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:29.352: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:28.939: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:28.943: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:28.586: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:18.060: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:17.727: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:17.361: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:17.036: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:16.681: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:14.422: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:11.533: INFO: Driver local doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:11.198: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:10.850: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:10.515: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:03:10.190: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:56.873: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:56.493: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename zone-support
Oct 13 09:02:55.980: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:02:56.104050 949413 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:02:56.104: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106
Oct 13 09:02:56.111: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Zone Support [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-zone-support-6242" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:54.673: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:32.488: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:32.108: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:29.293: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:28.938: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:28.585: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level]
  github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-vsan-policy
Oct 13 09:02:28.011: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 09:02:28.208346 948341 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 09:02:28.208: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86
Oct 13 09:02:28.220: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere]
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-vsan-policy-426" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:02:19.749: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:54.046: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:48.419: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:47.977: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:47.597: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:47.247: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:46.870: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:40.698: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:40.358: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:38.564: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] vcp-performance [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename vcp-performance Oct 13 09:01:38.045: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:01:38.209319 946240 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:01:38.209: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] vcp-performance [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_perf.go:69 Oct 13 09:01:38.213: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] vcp-performance [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-vcp-performance-8258" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_perf.go:70]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename persistentvolumereclaim Oct 13 09:01:37.234: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:01:37.398552 946227 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:01:37.398: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:47 [BeforeEach] persistentvolumereclaim:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:54 Oct 13 09:01:37.405: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] persistentvolumereclaim:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:63 STEP: running testCleanupVSpherePersistentVolumeReclaim [AfterEach] [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-persistentvolumereclaim-5708" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/pv_reclaimpolicy.go:55]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:36.731: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:36.368: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:36.039: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:30.186: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:29.884: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:28.755: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] PersistentVolumes GCEPD k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pv Oct 13 09:01:28.129: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:01:28.298007 945842 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:01:28.298: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:77 Oct 13 09:01:28.309: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [sig-storage] PersistentVolumes GCEPD k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-pv-8251" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:111 Oct 13 09:01:28.326: INFO: AfterEach: Cleaning up test resources Oct 13 09:01:28.326: INFO: pvc is nil Oct 13 09:01:28.326: INFO: pv is nil skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/persistent_volumes-gce.go:85]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:27.487: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:27.115: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:26.740: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:26.370: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:25.982: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:25.545: INFO: Driver nfs doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:25.159: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-disk-format Oct 13 09:01:24.523: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:01:24.697253 945729 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:01:24.697: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:70 Oct 13 09:01:24.704: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Volume Disk Format [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-disk-format-8427" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:19.826: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:16.104: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:15.759: INFO: Driver hostPath doesn't support ext4 -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPath doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:15.443: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:13.631: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 09:01:13.102: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 09:01:13.262717 945114 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 09:01:13.262: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 09:01:13.268: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-3972" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:12.577: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:12.259: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:06.867: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:06.477: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:02.109: INFO: Driver "cinder" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:01:01.486: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:48.453: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:48.134: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:37.109: INFO: Driver "csi-hostpath" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:36.785: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:36.462: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:36.048: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:35.662: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:35.212: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:34.821: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:34.407: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:33.071: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 09:00:16.992: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:52.694: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename ephemeral Oct 13 08:59:52.069: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:59:52.286358 941794 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:59:52.286: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support multiple inline ephemeral volumes [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/ephemeral.go:221 Oct 13 08:59:52.293: INFO: Multiple generic ephemeral volumes with immediate binding may cause pod startup failures when the volumes get created in separate topology segments. [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-ephemeral-3943" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/ephemeral.go:224]: Multiple generic ephemeral volumes with immediate binding may cause pod startup failures when the volumes get created in separate topology segments.
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:51.319: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:51.056: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:50.938: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:50.668: INFO: Driver "nfs" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "nfs" does not support volume expansion - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:44.223: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:43.822: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:34.353: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:34.020: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver hostPathSymlink doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:31.928: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:31.569: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-placement Oct 13 08:59:30.920: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:59:31.074904 940969 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:59:31.074: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:55 Oct 13 08:59:31.084: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-placement-2042" for this suite. [AfterEach] [sig-storage] Volume Placement [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:73 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_placement.go:56]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:25.810: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:59:11.372: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:57.241: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:47.880: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:47.606: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:47.409: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:47.152: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:45.951: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:45.609: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename provisioning Oct 13 08:58:45.054: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:58:45.238944 939144 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:58:45.239: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:395 Oct 13 08:58:45.242: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Oct 13 08:58:45.251: INFO: Creating resource for inline volume Oct 13 08:58:45.251: INFO: Driver hostPath on volume type InlineVolume doesn't support readOnly source STEP: Deleting pod Oct 13 08:58:45.251: INFO: Deleting pod "pod-subpath-test-inlinevolume-kfkk" in namespace "e2e-provisioning-3342" [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-provisioning-3342" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver hostPath on volume type InlineVolume doesn't support readOnly source
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:44.603: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:44.139: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:37.783: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:37.348: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:36.905: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:36.583: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:35.424: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:35.068: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:34.655: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:34.331: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:33.974: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:33.644: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:22.367: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pv Oct 13 08:58:14.346: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:58:14.525583 937754 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:58:14.525: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63 Oct 13 08:58:14.531: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-pv-8721" for this suite. [AfterEach] [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:112 Oct 13 08:58:14.548: INFO: AfterEach: Cleaning up test resources skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:64]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:13.678: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:13.276: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:58:12.924: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:59.715: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:41.543: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:41.192: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 08:57:40.586: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:57:40.786090 936646 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:57:40.786: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 08:57:40.790: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-1575" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:39.959: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:39.807: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:27.307: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:25.975: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:25.970: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:57:25.595: INFO: Driver "csi-hostpath" does not support FsGroup - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/fsgroupchangepolicy.go:79]: Driver "csi-hostpath" does not support FsGroup - skipping
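The csi-hostpath skip above comes from a different kind of gate: a per-driver capability map rather than a provider list. A minimal, self-contained sketch of that pattern follows (type and constant names are illustrative assumptions; the real flags live in each driver's DriverInfo in the k8s e2e storage framework, consulted by fsgroupchangepolicy.go before the test body runs):

```go
package main

// Illustrative model of the capability gate behind skips such as
// `Driver "csi-hostpath" does not support FsGroup - skipping`.
import "fmt"

type capability string

const capFsGroup capability = "fsGroup" // assumed name for illustration

type driverInfo struct {
	name         string
	capabilities map[capability]bool
}

func main() {
	csiHostPath := driverInfo{
		name:         "csi-hostpath",
		capabilities: map[capability]bool{capFsGroup: false},
	}
	// Suites check the driver's declared capabilities and record a skip
	// instead of running a pattern the driver cannot satisfy.
	if !csiHostPath.capabilities[capFsGroup] {
		fmt.Printf("skip: Driver %q does not support FsGroup - skipping\n", csiHostPath.name)
	}
}
```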
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 08:57:25.307: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:57:25.540730 936118 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:57:25.540: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 08:57:25.548: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-8779" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:56:46.942: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/util.go:133]: Unexpected error:
<exec.CodeExitError>: {
    Err: {
        s: "error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
    },
    Code: 28,
}
error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28
occurred
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Oct 13 08:56:43.103: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:56:43.240489 934378 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:56:43.240: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:749
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:924
Oct 13 08:56:43.288: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:45.296: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 13 08:56:45.300: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 13 08:56:45.613: INFO: rc: 7
Oct 13 08:56:45.645: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 13 08:56:45.654: INFO: Pod kube-proxy-mode-detector no longer exists
Oct 13 08:56:45.654: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace e2e-services-2073
Oct 13 08:56:45.683: INFO: sourceip-test cluster ip: 172.30.139.16
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Oct 13 08:56:45.736: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:47.791: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:49.747: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:51.751: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:53.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:55.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:57.749: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:56:59.746: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:01.745: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:03.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:05.743: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:07.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:09.744: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:11.748: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 13 08:57:13.742: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace e2e-services-2073 to expose endpoints map[echo-sourceip:[8080]]
Oct 13 08:57:13.761: INFO: successfully validated that service sourceip-test in namespace e2e-services-2073 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Oct 13 08:57:13.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 13 08:57:15.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:17.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:19.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:21.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:23.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:25.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:27.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:29.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:31.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:33.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:35.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:37.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:39.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.October, 13, 8, 57, 30, 0, time.Local), LastTransitionTime:time.Date(2022, time.October, 13, 8, 57, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-867d667966\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 13 08:57:41.823: INFO: Waiting up to 2m0s to get response from 172.30.139.16:8080
Oct 13 08:57:41.824: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip'
Oct 13 08:58:12.198: INFO: rc: 28
Oct 13 08:58:12.199: INFO: got err: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 13 08:58:14.200: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip'
Oct 13 08:58:44.601: INFO: rc: 28
Oct 13 08:58:44.601: INFO: got err: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 13 08:58:46.602: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip'
Oct 13 08:59:16.982: INFO: rc: 28
Oct 13 08:59:16.982: INFO: got err: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 13 08:59:18.982: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip'
Oct 13 08:59:49.249: INFO: rc: 28
Oct 13 08:59:49.249: INFO: got err: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 13 08:59:51.250: INFO: Deleting deployment
Oct 13 08:59:51.299: INFO: Cleaning up the echo server pod
Oct 13 08:59:51.313: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "e2e-services-2073".
STEP: Found 24 events.
Oct 13 08:59:51.366: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for echo-sourceip: { } Scheduled: Successfully assigned e2e-services-2073/echo-sourceip to ostest-n5rnf-worker-0-j4pkp
Oct 13 08:59:51.366: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned e2e-services-2073/kube-proxy-mode-detector to ostest-n5rnf-worker-0-j4pkp
Oct 13 08:59:51.366: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pause-pod-867d667966-q4kj2: { } Scheduled: Successfully assigned e2e-services-2073/pause-pod-867d667966-q4kj2 to ostest-n5rnf-worker-0-94fxs
Oct 13 08:59:51.366: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pause-pod-867d667966-xpjcj: { } Scheduled: Successfully assigned e2e-services-2073/pause-pod-867d667966-xpjcj to ostest-n5rnf-worker-0-8kq82
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:56:43 +0000 UTC - event for kube-proxy-mode-detector: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:56:43 +0000 UTC - event for kube-proxy-mode-detector: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:56:43 +0000 UTC - event for kube-proxy-mode-detector: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:56:46 +0000 UTC - event for kube-proxy-mode-detector: {kubelet ostest-n5rnf-worker-0-j4pkp} Killing: Stopping container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:10 +0000 UTC - event for echo-sourceip: {multus } AddedInterface: Add eth0 [10.128.164.254/23] from kuryr
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:10 +0000 UTC - event for echo-sourceip: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:10 +0000 UTC - event for echo-sourceip: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:10 +0000 UTC - event for echo-sourceip: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:13 +0000 UTC - event for pause-pod: {deployment-controller } ScalingReplicaSet: Scaled up replica set pause-pod-867d667966 to 2
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:13 +0000 UTC - event for pause-pod-867d667966: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-867d667966-xpjcj
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:13 +0000 UTC - event for pause-pod-867d667966: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-867d667966-q4kj2
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:29 +0000 UTC - event for pause-pod-867d667966-xpjcj: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container agnhost-pause
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:29 +0000 UTC - event for pause-pod-867d667966-xpjcj: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container agnhost-pause
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:29 +0000 UTC - event for pause-pod-867d667966-xpjcj: {multus } AddedInterface: Add eth0 [10.128.164.7/23] from kuryr
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:29 +0000 UTC - event for pause-pod-867d667966-xpjcj: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:39 +0000 UTC - event for pause-pod-867d667966-q4kj2: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:39 +0000 UTC - event for pause-pod-867d667966-q4kj2: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container agnhost-pause
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:39 +0000 UTC - event for pause-pod-867d667966-q4kj2: {multus } AddedInterface: Add eth0 [10.128.164.95/23] from kuryr
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:57:40 +0000 UTC - event for pause-pod-867d667966-q4kj2: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container agnhost-pause
Oct 13 08:59:51.366: INFO: At 2022-10-13 08:59:51 +0000 UTC - event for echo-sourceip: {kubelet ostest-n5rnf-worker-0-j4pkp} Killing: Stopping container agnhost-container
Oct 13 08:59:51.374: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 13 08:59:51.374: INFO: echo-sourceip ostest-n5rnf-worker-0-j4pkp Running 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:56:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:56:45 +0000 UTC }]
Oct 13 08:59:51.374: INFO: pause-pod-867d667966-q4kj2 ostest-n5rnf-worker-0-94fxs Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:13 +0000 UTC }]
Oct 13 08:59:51.374: INFO: pause-pod-867d667966-xpjcj ostest-n5rnf-worker-0-8kq82 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:57:13 +0000 UTC }]
Oct 13 08:59:51.374: INFO:
Oct 13 08:59:51.385: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-services-2073" for this suite.
[AfterEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:753
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/util.go:133]: Unexpected error:
<exec.CodeExitError>: {
    Err: {
        s: "error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
    },
    Code: 28,
}
error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-2073 exec pause-pod-867d667966-q4kj2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip
command terminated with exit code 28

error:
exit status 28
occurred
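For context on the failure above: curl exit code 28 is CURLE_OPERATION_TIMEDOUT, so every retry of the probe timed out before the ClusterIP service answered. The echo-sourceip backend and both pause pods were Ready, which suggests a service-reachability problem from the pause pod's node rather than a pod failure. Below is a minimal repro sketch (not part of the suite) that re-runs the same probe by hand; the pod name, namespace, and service IP are copied from the log, and it assumes the test namespace still exists and kubectl is on PATH with a current kubeconfig:

```go
package main

// Re-runs the probe the test retried until its 2m timeout: kubectl exec into
// the pause pod and curl the sourceip-test ClusterIP's /clientip endpoint.
import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl",
		"--namespace=e2e-services-2073",
		"exec", "pause-pod-867d667966-q4kj2", "--",
		"/bin/sh", "-x", "-c",
		"curl -q -s --connect-timeout 30 172.30.139.16:8080/clientip")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		// The test saw exit code 28 on every attempt; a healthy service
		// instead returns the caller's source IP:port on stdout.
		fmt.Println("exit code:", ee.ExitCode())
	}
}
```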
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:56:42.574: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:56:11.396: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:56:00.675: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:56:00.326: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:59.987: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:59.616: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping InlineVolume pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:59.180: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:58.775: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:54.726: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:54.353: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:53.968: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:53.665: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:53.291: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:52.038: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:51.707: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:51.398: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:47.470: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:47.041: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:43.077: INFO: Driver local doesn't support ext4 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:34.484: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename provisioning Oct 13 08:55:33.815: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:55:34.024399 931273 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:55:34.024: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:395 Oct 13 08:55:34.028: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics Oct 13 08:55:34.028: INFO: Creating resource for inline volume Oct 13 08:55:34.028: INFO: Driver emptydir on volume type InlineVolume doesn't support readOnly source STEP: Deleting pod Oct 13 08:55:34.028: INFO: Deleting pod "pod-subpath-test-inlinevolume-5rmt" in namespace "e2e-provisioning-1713" [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-provisioning-1713" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/subpath.go:399]: Driver emptydir on volume type InlineVolume doesn't support readOnly source
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:33.272: INFO: Driver nfs doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver nfs doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:32.951: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:31.733: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-api-machinery] API priority and fairness k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename apf Oct 13 08:55:29.800: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:55:29.943375 930899 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:55:29.943: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that requests can't be drowned out (fairness) [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:185 [AfterEach] [sig-api-machinery] API priority and fairness k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-apf-6694" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/apimachinery/flowcontrol.go:187]: skipping test until flakiness is resolved
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:29.264: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:23.536: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:55:23.182: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/base.go:244]: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:57.938: INFO: Driver cinder doesn't support ntfs -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ntfs -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:57.618: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:57.243: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:56.897: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:18.030: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:17.693: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:17.374: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:17.040: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:12.154: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:11.747: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:11.363: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:09.130: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:08.794: INFO: Driver local doesn't support ext3 -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver local doesn't support ext3 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:08.449: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:08.128: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:54:07.767: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:53:58.797: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume-vsan-policy Oct 13 08:53:34.274: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:53:34.448894 926898 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:53:34.448: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:86 Oct 13 08:53:34.453: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volume-vsan-policy-1504" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_vsan_policy.go:87]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:53:30.180: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:53:14.449: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:53:14.061: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:52.711: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:52.272: INFO: Driver emptydir doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver emptydir doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:51.918: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:51.561: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:37.856: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:37.523: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:37.168: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:36.803: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:33.070: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:32.689: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:32.377: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:32.053: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:52:31.752: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename zone-support Oct 13 08:51:50.336: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:51:50.454722 922710 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:51:50.454: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:106 Oct 13 08:51:50.458: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [sig-storage] Zone Support [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-zone-support-2392" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_zone_support.go:107]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:51:49.751: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:51:49.399: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volumemode Oct 13 08:51:48.763: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:51:48.903988 922669 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:51:48.904: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should not mount / map unused volumes in a pod [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumemode.go:352 Oct 13 08:51:48.913: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-volumemode-1636" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:51:48.027: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:51:44.799: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:51:44.353: INFO: Driver cinder doesn't support ext4 -- skipping [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:121]: Driver cinder doesn't support ext4 -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:51:43.973: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:1033]: Unexpected error:
    <*errors.errorString | 0xc001972380>: {
        s: "service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol",
    }
service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol
occurred

[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61
[BeforeEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename services
Oct 13 08:51:22.245: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:51:22.435560  921314 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:51:22.435: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:749
[It] should allow pods to hairpin back to themselves through services [Suite:openshift/conformance/parallel] [Suite:k8s]
  k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:1007
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace e2e-services-1435
Oct 13 08:51:22.466: INFO: hairpin-test cluster ip: 172.30.138.255
STEP: creating a client/server pod
Oct 13 08:51:22.543: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
[... identical Pending status line logged roughly every 2s through Oct 13 08:52:08.562 ...]
Oct 13 08:52:10.556: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace e2e-services-1435 to expose endpoints map[hairpin:[8080]]
Oct 13 08:52:10.586: INFO: successfully validated that service hairpin-test in namespace e2e-services-1435 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Oct 13 08:52:11.588: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 13 08:52:13.903: INFO: rc: 1
Oct 13 08:52:13.903: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the same exec-and-retry cycle repeated at 08:52:14, 08:52:17, 08:52:20, 08:52:23, 08:52:26, and 08:52:29, each attempt failing with the identical nc timeout and ending with "Retrying..." ...]
Oct 13 08:52:32.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:35.185: INFO: rc: 1 Oct 13 08:52:35.185: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:52:35.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:38.232: INFO: rc: 1 Oct 13 08:52:38.232: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:52:38.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:41.265: INFO: rc: 1 Oct 13 08:52:41.265: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:52:41.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:44.234: INFO: rc: 1 Oct 13 08:52:44.234: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 13 08:52:44.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:47.205: INFO: rc: 1 Oct 13 08:52:47.205: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:52:47.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:50.187: INFO: rc: 1 Oct 13 08:52:50.187: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:52:50.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:53.312: INFO: rc: 1 Oct 13 08:52:53.312: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:52:53.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:56.248: INFO: rc: 1 Oct 13 08:52:56.248: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 13 08:52:56.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:52:59.274: INFO: rc: 1 Oct 13 08:52:59.274: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:52:59.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:02.216: INFO: rc: 1 Oct 13 08:53:02.216: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:02.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:05.199: INFO: rc: 1 Oct 13 08:53:05.199: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:05.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:08.205: INFO: rc: 1 Oct 13 08:53:08.205: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 13 08:53:08.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:11.199: INFO: rc: 1 Oct 13 08:53:11.199: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:11.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:14.240: INFO: rc: 1 Oct 13 08:53:14.240: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:14.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:17.288: INFO: rc: 1 Oct 13 08:53:17.288: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:17.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:20.194: INFO: rc: 1 Oct 13 08:53:20.194: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 13 08:53:20.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:23.191: INFO: rc: 1 Oct 13 08:53:23.192: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:23.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:26.226: INFO: rc: 1 Oct 13 08:53:26.226: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:26.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:29.213: INFO: rc: 1 Oct 13 08:53:29.213: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:29.903: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:32.248: INFO: rc: 1 Oct 13 08:53:32.248: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 13 08:53:32.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:35.255: INFO: rc: 1 Oct 13 08:53:35.255: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:35.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:38.262: INFO: rc: 1 Oct 13 08:53:38.262: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:38.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:41.276: INFO: rc: 1 Oct 13 08:53:41.276: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:41.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:44.343: INFO: rc: 1 Oct 13 08:53:44.343: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 13 08:53:44.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:47.223: INFO: rc: 1 Oct 13 08:53:47.223: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:47.905: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:50.210: INFO: rc: 1 Oct 13 08:53:50.210: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:50.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:53.218: INFO: rc: 1 Oct 13 08:53:53.218: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:53.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:56.190: INFO: rc: 1 Oct 13 08:53:56.190: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 13 08:53:56.908: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:53:59.234: INFO: rc: 1 Oct 13 08:53:59.234: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:53:59.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:54:02.243: INFO: rc: 1 Oct 13 08:54:02.243: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:54:02.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:54:05.216: INFO: rc: 1 Oct 13 08:54:05.216: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + + nc -v -techo -w hostName 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:54:05.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:54:08.282: INFO: rc: 1 Oct 13 08:54:08.282: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 13 08:54:08.903: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:54:11.185: INFO: rc: 1 Oct 13 08:54:11.185: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:54:11.904: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:54:14.300: INFO: rc: 1 Oct 13 08:54:14.300: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + nc -v -t -w 2 hairpin-test 8080 + echo hostName nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Oct 13 08:54:14.300: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Oct 13 08:54:16.622: INFO: rc: 1 Oct 13 08:54:16.623: INFO: Service reachability failing with error: error running /usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-services-1435 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080: Command stdout: stderr: + echo hostName + nc -v -t -w 2 hairpin-test 8080 nc: connect to hairpin-test port 8080 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... [AfterEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "e2e-services-1435". STEP: Found 5 events. 
Oct 13 08:54:16.638: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hairpin: { } Scheduled: Successfully assigned e2e-services-1435/hairpin to ostest-n5rnf-worker-0-94fxs
Oct 13 08:54:16.638: INFO: At 2022-10-13 08:52:07 +0000 UTC - event for hairpin: {multus } AddedInterface: Add eth0 [10.128.174.191/23] from kuryr
Oct 13 08:54:16.638: INFO: At 2022-10-13 08:52:08 +0000 UTC - event for hairpin: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-k8s-gcr-io-e2e-test-images-agnhost-2-32-_wCOtsOr37BcGgzf" already present on machine
Oct 13 08:54:16.638: INFO: At 2022-10-13 08:52:08 +0000 UTC - event for hairpin: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container agnhost-container
Oct 13 08:54:16.638: INFO: At 2022-10-13 08:52:08 +0000 UTC - event for hairpin: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container agnhost-container
Oct 13 08:54:16.642: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 13 08:54:16.642: INFO: hairpin ostest-n5rnf-worker-0-94fxs Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:51:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:52:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:52:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 08:51:22 +0000 UTC }]
Oct 13 08:54:16.642: INFO:
Oct 13 08:54:16.654: INFO: skipping dumping cluster info - cluster too large
STEP: Destroying namespace "e2e-services-1435" for this suite.
[AfterEach] [sig-network] Services k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:753
fail [k8s.io/kubernetes@v1.22.1/test/e2e/network/service.go:1033]: Unexpected error: <*errors.errorString | 0xc001972380>: { s: "service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol occurred
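For triage, the hairpin probe above can be reproduced by hand. The sketch below is not the test's own tooling: it assumes a reachable cluster with a working kubeconfig, the namespace/pod/service names are illustrative, and the image is the upstream agnhost image that the run's quay.io mirror corresponds to.

# Minimal manual reproduction of the hairpin check (sketch; names illustrative)
NS=hairpin-debug
kubectl create namespace "$NS"
# One pod that is both server and client, like the e2e test's "hairpin" pod
kubectl -n "$NS" run hairpin --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 \
  --port=8080 -- netexec --http-port=8080
# ClusterIP service in front of it, matching the test's hairpin-test service
kubectl -n "$NS" expose pod hairpin --name=hairpin-test --port=8080 --target-port=8080
kubectl -n "$NS" wait --for=condition=Ready pod/hairpin --timeout=120s
# The probe itself: the pod dials its own service VIP, as in the failing log
kubectl -n "$NS" exec hairpin -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 hairpin-test 8080'

If the probe times out while the pod is Ready, as it did here for the full 2m0s (the pod's interface came from kuryr per the AddedInterface event), the usual suspect is the SDN's hairpin/loopback NAT handling rather than the pod or the kubelet.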
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:28.617: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:26.424: INFO: Only supported for providers [azure] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1567]: Only supported for providers [azure] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:26.069: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:25.663: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:25.218: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:24.974: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:24.761: INFO: Driver "local" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:113]: Driver "local" does not provide raw block - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:24.311: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:24.128: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:23.965: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:23.759: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] capacity k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume-disk-format
Oct 13 08:50:24.335: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:50:24.560161 919226 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:50:24.560: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Disk Format [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:70
Oct 13 08:50:24.564: INFO: Only supported for providers [vsphere] (not openstack)
[AfterEach] [sig-storage] Volume Disk Format [Feature:vsphere] k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-disk-format-3988" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/vsphere/vsphere_volume_diskformat.go:71]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:23.532: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:23.120: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume
Oct 13 08:50:23.725: INFO: About to run a Kube e2e test, ensuring namespace is privileged
W1013 08:50:24.180933 919158 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 13 08:50:24.181: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:196
Oct 13 08:50:24.188: INFO: Driver "csi-hostpath" does not support exec - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186
STEP: Destroying namespace "e2e-volume-7948" for this suite.
skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volumes.go:106]: Driver "csi-hostpath" does not support exec - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:23.083: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-storage] CSI mock volume k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename csi-mock-volumes Oct 13 08:50:23.383: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:50:23.631512 919134 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:50:23.631: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1765 STEP: Building a driver namespace object, basename e2e-csi-mock-volumes-7587 Oct 13 08:50:24.222: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 13 08:50:24.496: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-attacher Oct 13 08:50:24.534: INFO: creating *v1.ClusterRole: external-attacher-runner-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.534: INFO: Define cluster role external-attacher-runner-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.549: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.569: INFO: creating *v1.Role: e2e-csi-mock-volumes-7587-5329/external-attacher-cfg-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.584: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-attacher-role-cfg Oct 13 08:50:24.607: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-provisioner Oct 13 08:50:24.640: INFO: creating *v1.ClusterRole: external-provisioner-runner-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.640: INFO: Define cluster role external-provisioner-runner-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.653: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.665: INFO: creating *v1.Role: e2e-csi-mock-volumes-7587-5329/external-provisioner-cfg-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.691: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-provisioner-role-cfg Oct 13 08:50:24.706: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-resizer Oct 13 08:50:24.715: INFO: creating *v1.ClusterRole: external-resizer-runner-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.716: INFO: Define cluster role external-resizer-runner-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.760: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.777: INFO: creating *v1.Role: e2e-csi-mock-volumes-7587-5329/external-resizer-cfg-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.800: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-resizer-role-cfg Oct 13 08:50:24.830: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-snapshotter Oct 13 08:50:24.839: INFO: creating *v1.ClusterRole: external-snapshotter-runner-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.839: INFO: Define cluster role 
external-snapshotter-runner-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.867: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.888: INFO: creating *v1.Role: e2e-csi-mock-volumes-7587-5329/external-snapshotter-leaderelection-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.899: INFO: creating *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/external-snapshotter-leaderelection Oct 13 08:50:24.941: INFO: creating *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-mock Oct 13 08:50:24.955: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:24.994: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:25.007: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:25.022: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:25.058: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:25.079: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-e2e-csi-mock-volumes-7587 Oct 13 08:50:25.096: INFO: creating *v1.StorageClass: csi-mock-sc-e2e-csi-mock-volumes-7587 Oct 13 08:50:25.120: INFO: creating *v1.StatefulSet: e2e-csi-mock-volumes-7587-5329/csi-mockplugin Oct 13 08:50:25.153: INFO: creating *v1.CSIDriver: csi-mock-e2e-csi-mock-volumes-7587 Oct 13 08:50:25.167: INFO: creating *v1.StatefulSet: e2e-csi-mock-volumes-7587-5329/csi-mockplugin-snapshotter Oct 13 08:50:25.184: INFO: waiting up to 4m0s for CSIDriver "csi-mock-e2e-csi-mock-volumes-7587" Oct 13 08:50:25.201: INFO: waiting for CSIDriver csi-mock-e2e-csi-mock-volumes-7587 to register on node ostest-n5rnf-worker-0-j4pkp W1013 08:51:29.760407 919134 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from W1013 08:51:29.760437 919134 metrics_grabber.go:151] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled. 
Oct 13 08:51:29.760: INFO: Snapshot controller metrics not found -- skipping STEP: Cleaning up resources STEP: deleting the test namespace: e2e-csi-mock-volumes-7587 STEP: Waiting for namespaces [e2e-csi-mock-volumes-7587] to vanish STEP: uninstalling csi mock driver Oct 13 08:52:01.850: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-attacher Oct 13 08:52:01.864: INFO: deleting *v1.ClusterRole: external-attacher-runner-e2e-csi-mock-volumes-7587 Oct 13 08:52:01.883: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:01.925: INFO: deleting *v1.Role: e2e-csi-mock-volumes-7587-5329/external-attacher-cfg-e2e-csi-mock-volumes-7587 Oct 13 08:52:01.962: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-attacher-role-cfg Oct 13 08:52:01.987: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-provisioner Oct 13 08:52:02.005: INFO: deleting *v1.ClusterRole: external-provisioner-runner-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.039: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.065: INFO: deleting *v1.Role: e2e-csi-mock-volumes-7587-5329/external-provisioner-cfg-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.084: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-provisioner-role-cfg Oct 13 08:52:02.108: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-resizer Oct 13 08:52:02.130: INFO: deleting *v1.ClusterRole: external-resizer-runner-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.151: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.180: INFO: deleting *v1.Role: e2e-csi-mock-volumes-7587-5329/external-resizer-cfg-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.199: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/csi-resizer-role-cfg Oct 13 08:52:02.228: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-snapshotter Oct 13 08:52:02.260: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.271: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.304: INFO: deleting *v1.Role: e2e-csi-mock-volumes-7587-5329/external-snapshotter-leaderelection-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.347: INFO: deleting *v1.RoleBinding: e2e-csi-mock-volumes-7587-5329/external-snapshotter-leaderelection Oct 13 08:52:02.365: INFO: deleting *v1.ServiceAccount: e2e-csi-mock-volumes-7587-5329/csi-mock Oct 13 08:52:02.390: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.406: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.421: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.444: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.466: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.497: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.506: INFO: deleting *v1.StorageClass: csi-mock-sc-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.519: INFO: deleting *v1.StatefulSet: e2e-csi-mock-volumes-7587-5329/csi-mockplugin Oct 13 08:52:02.534: INFO: deleting *v1.CSIDriver: 
csi-mock-e2e-csi-mock-volumes-7587 Oct 13 08:52:02.560: INFO: deleting *v1.StatefulSet: e2e-csi-mock-volumes-7587-5329/csi-mockplugin-snapshotter STEP: deleting the driver namespace: e2e-csi-mock-volumes-7587-5329 STEP: Waiting for namespaces [e2e-csi-mock-volumes-7587-5329] to vanish [AfterEach] [sig-storage] CSI mock volume k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/csi_mock_volume.go:1786]: Snapshot controller metrics not found -- skipping
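The skip above originates in the metrics grabber: it finds no pods in kube-system (see the two W1013 warnings), disables snapshot-controller metrics, and the test bails out. On OpenShift the CSI snapshot controller typically runs in an openshift-* namespace rather than kube-system, so the upstream lookup comes up empty. A minimal client-go sketch of that kind of existence check, assuming a local kubeconfig and an `app=snapshot-controller` label selector (both are illustrative assumptions, not the grabber's exact lookup):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig; the path handling is an assumption.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// The upstream grabber looks in kube-system; the label selector here is
	// a hypothetical example, not the selector the real grabber uses.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=snapshot-controller"})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		fmt.Println("snapshot-controller pod not found -- metrics grabbing disabled")
	}
}
```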
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:22.716: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:22.588: INFO: Driver "cinder" does not support volume expansion - skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/volume_expand.go:94]: Driver "cinder" does not support volume expansion - skipping
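The volume-expand suite checks a driver capability flag before doing anything; the in-tree "cinder" driver does not advertise expansion, so the pattern skips. For dynamically provisioned volumes there is also a cluster-side switch: the StorageClass must set allowVolumeExpansion. A minimal client-go sketch of inspecting that flag (kubeconfig handling and the class name "standard" are assumptions):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// "standard" is a placeholder StorageClass name, not taken from this run.
	sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Even with this flag set, expansion still requires the provisioner
	// itself to support it -- which is what the cinder driver lacks here.
	if sc.AllowVolumeExpansion != nil && *sc.AllowVolumeExpansion {
		fmt.Println("StorageClass permits volume expansion")
	} else {
		fmt.Println("StorageClass does not permit volume expansion")
	}
}
```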
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:22.330: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:22.239: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:22.090: INFO: Only supported for providers [gce gke] (not openstack) [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1302]: Only supported for providers [gce gke] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:22.065: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.863: INFO: Driver hostPath doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPath doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.766: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-apps] Deployment k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename deployment Oct 13 08:50:22.269: INFO: About to run a Kube e2e test, ensuring namespace is privileged W1013 08:50:22.463260 918993 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 08:50:22.463: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:89 [It] should not disrupt a cloud load-balancer's connectivity during rollout [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:161 Oct 13 08:50:22.476: INFO: Only supported for providers [aws azure gce gke] (not openstack) [AfterEach] [sig-apps] Deployment k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:83 Oct 13 08:50:22.488: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-deployment-2304" for this suite. skip [k8s.io/kubernetes@v1.22.1/test/e2e/apps/deployment.go:162]: Only supported for providers [aws azure gce gke] (not openstack)
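This deployment test is provider-gated: it exercises cloud load-balancer behavior that only exists on certain platforms, so on OpenStack it skips inside the It body before doing any work. A self-contained sketch that mimics the framework's provider gate (the real suite uses its own skipper helpers; the environment variable here is purely illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// skipUnlessProviderIs mimics the e2e framework's provider gate: if the
// current provider is not in the supported list, the test is skipped.
func skipUnlessProviderIs(current string, supported ...string) bool {
	for _, p := range supported {
		if p == current {
			return true
		}
	}
	fmt.Printf("Only supported for providers %v (not %s)\n", supported, current)
	return false
}

func main() {
	provider := os.Getenv("PROVIDER") // e.g. "openstack"; the env var name is an assumption
	if !skipUnlessProviderIs(provider, "aws", "azure", "gce", "gke") {
		os.Exit(0) // skip, exactly as logged above
	}
	// ... the load-balancer connectivity rollout test would run here ...
}
```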
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.597: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.562: INFO: Driver "csi-hostpath" does not support topology - skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/testsuites/topology.go:92]: Driver "csi-hostpath" does not support topology - skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.352: INFO: Only supported for providers [aws] (not openstack) [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1711]: Only supported for providers [aws] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.229: INFO: Only supported for providers [vsphere] (not openstack) [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/drivers/in_tree.go:1438]: Only supported for providers [vsphere] (not openstack)
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.366: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.109: INFO: Driver local doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support DynamicPV -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.092: INFO: Driver local doesn't support InlineVolume -- skipping [AfterEach] [Testpattern: Inline-volume (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support InlineVolume -- skipping
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:51 Oct 13 08:50:21.314: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 skip [k8s.io/kubernetes@v1.22.1/test/e2e/storage/framework/testsuite.go:116]: Driver hostPathSymlink doesn't support DynamicPV -- skipping
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:13:38.852: 7 pods found in best-effort QoS: openshift-kuryr/kuryr-cni-2rrvs is running in best-effort QoS openshift-kuryr/kuryr-cni-cjcgk is running in best-effort QoS openshift-kuryr/kuryr-cni-crfvc is running in best-effort QoS openshift-kuryr/kuryr-cni-ndzt5 is running in best-effort QoS openshift-kuryr/kuryr-cni-t448w is running in best-effort QoS openshift-kuryr/kuryr-cni-xzbzv is running in best-effort QoS openshift-kuryr/kuryr-controller-7654df4d98-f2qvz is running in best-effort QoS
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [It] ensure control plane pods do not run in best-effort QoS [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/operators/qos.go:20 Oct 13 10:13:38.852: FAIL: 7 pods found in best-effort QoS: openshift-kuryr/kuryr-cni-2rrvs is running in best-effort QoS openshift-kuryr/kuryr-cni-cjcgk is running in best-effort QoS openshift-kuryr/kuryr-cni-crfvc is running in best-effort QoS openshift-kuryr/kuryr-cni-ndzt5 is running in best-effort QoS openshift-kuryr/kuryr-cni-t448w is running in best-effort QoS openshift-kuryr/kuryr-cni-xzbzv is running in best-effort QoS openshift-kuryr/kuryr-controller-7654df4d98-f2qvz is running in best-effort QoS Full Stack Trace github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000001a0) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113 +0xba github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0030f4e68) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:64 +0x125 github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x7f90603504c8) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/it_node.go:26 +0x7b github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001e73680, 0xc0030f5230, {0x83433a0, 0xc000330940}) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:215 +0x2a9 github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001e73680, {0x83433a0, 0xc000330940}) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:138 +0xe7 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001e2ab40, 0xc001e73680) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:200 +0xe5 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001e2ab40) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:170 +0x1a5 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001e2ab40) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:66 +0xc5 github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00031e780, {0x8343660, 0xc001b60e10}, {0x0, 0x7f90385531b8}, {0xc000f9a010, 0x1, 0x1}, {0x843fe58, 0xc000330940}, ...) 
github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/suite/suite.go:62 +0x4b2 github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc000a3fad0, {0xc00064a7f0, 0xb8fc7b0, 0x457d780}) github.com/openshift/origin/pkg/test/ginkgo/cmd_runtest.go:61 +0x3be main.newRunTestCommand.func1.1() github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x32 github.com/openshift/origin/test/extended/util.WithCleanup(0xc0019bfc18) github.com/openshift/origin/test/extended/util/test.go:168 +0xad main.newRunTestCommand.func1(0xc001d89680, {0xc00064a7f0, 0x1, 0x1}) github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x38a github.com/spf13/cobra.(*Command).execute(0xc001d89680, {0xc00064a7b0, 0x1, 0x1}) github.com/spf13/cobra@v1.1.3/command.go:852 +0x60e github.com/spf13/cobra.(*Command).ExecuteC(0xc001d88c80) github.com/spf13/cobra@v1.1.3/command.go:960 +0x3ad github.com/spf13/cobra.(*Command).Execute(...) github.com/spf13/cobra@v1.1.3/command.go:897 main.main.func1(0xc000b54700) github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:84 +0x8a main.main() github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:85 +0x3b6 [AfterEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:140 [AfterEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:141 fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:13:38.852: 7 pods found in best-effort QoS: openshift-kuryr/kuryr-cni-2rrvs is running in best-effort QoS openshift-kuryr/kuryr-cni-cjcgk is running in best-effort QoS openshift-kuryr/kuryr-cni-crfvc is running in best-effort QoS openshift-kuryr/kuryr-cni-ndzt5 is running in best-effort QoS openshift-kuryr/kuryr-cni-t448w is running in best-effort QoS openshift-kuryr/kuryr-cni-xzbzv is running in best-effort QoS openshift-kuryr/kuryr-controller-7654df4d98-f2qvz is running in best-effort QoS
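This is a genuine failure rather than a skip: the conformance check requires every control-plane pod to declare resources, because a pod whose containers set no CPU or memory requests or limits is classified as BestEffort and is the first to be evicted under node pressure. All seven offending pods belong to openshift-kuryr. A minimal client-go sketch of the same scan (kubeconfig handling and the namespace prefix are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Empty namespace argument lists pods across all namespaces.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		// Only control-plane namespaces are in scope for the check.
		if !strings.HasPrefix(pod.Namespace, "openshift-") {
			continue
		}
		// The kubelet derives QOSClass from the containers' requests/limits;
		// BestEffort means no container sets any requests or limits at all.
		if pod.Status.QOSClass == corev1.PodQOSBestEffort {
			fmt.Printf("%s/%s is running in best-effort QoS\n", pod.Namespace, pod.Name)
		}
	}
}
```

On the workload side, adding resources.requests to the kuryr-cni and kuryr-controller pod templates would move them into the Burstable class and satisfy this check.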
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-cli] oc explain networking types github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-cli] oc explain networking types github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:36:43.646: INFO: configPath is now "/tmp/configfile1982570296" Oct 13 10:36:43.646: INFO: The user is now "e2e-test-oc-explain-w8cqf-user" Oct 13 10:36:43.646: INFO: Creating project "e2e-test-oc-explain-w8cqf" Oct 13 10:36:43.884: INFO: Waiting on permissions in project "e2e-test-oc-explain-w8cqf" ... Oct 13 10:36:43.893: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:36:44.012: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:36:44.138: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:36:44.254: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:36:44.266: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:36:44.279: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:36:44.895: INFO: Project "e2e-test-oc-explain-w8cqf" has been fully provisioned. [BeforeEach] when using openshift-sdn github.com/openshift/origin/test/extended/networking/util.go:396 Oct 13 10:36:45.049: INFO: Not using openshift-sdn [AfterEach] [sig-cli] oc explain networking types github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:36:45.075: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-oc-explain-w8cqf-user}, err: <nil> Oct 13 10:36:45.088: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-oc-explain-w8cqf}, err: <nil> Oct 13 10:36:45.101: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~n3wLJS5No_EjiC0c_09c7pFgD1xK1_UpvxQm38M8qzs}, err: <nil> [AfterEach] [sig-cli] oc explain networking types github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-oc-explain-w8cqf" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:398]: Not using openshift-sdn
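This and the following network skips share one cause: the suite probes for the OpenshiftSDN plugin, the probe fails on this cluster (exit status 1), and each plugin-gated test skips. The authoritative source for the configured plugin is the cluster-scoped Network config object; a minimal sketch of reading it with the OpenShift config clientset (kubeconfig handling is an assumption):

```go
package main

import (
	"context"
	"fmt"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := configclient.NewForConfigOrDie(config)

	// The cluster-scoped Network config object is always named "cluster".
	network, err := client.ConfigV1().Networks().Get(context.TODO(), "cluster", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("networkType:", network.Status.NetworkType)
}
```

On this run the reported networkType would be Kuryr, which is why neither the openshift-sdn nor the ovn-kubernetes gated tests execute.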
fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:35:48.384: Daemonsets found that do not meet platform requirements for update strategy: expected daemonset openshift-kuryr/kuryr-cni to have maxUnavailable 10% or 33% (see comment) instead of 1
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-arch] Managed cluster github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [It] should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/operators/daemon_set.go:41 Oct 13 10:35:48.384: INFO: Daemonset configuration in payload: daemonset openshift-cluster-csi-drivers/openstack-cinder-csi-driver-node has 10% daemonset openshift-cluster-node-tuning-operator/tuned has 10% daemonset openshift-dns/dns-default has 10% daemonset openshift-dns/node-resolver has 33% daemonset openshift-image-registry/node-ca has 10% daemonset openshift-ingress-canary/ingress-canary has 10% daemonset openshift-machine-config-operator/machine-config-daemon has 10% daemonset openshift-manila-csi-driver/csi-nodeplugin-nfsplugin has 10% daemonset openshift-manila-csi-driver/openstack-manila-csi-nodeplugin has 10% daemonset openshift-monitoring/node-exporter has 10% daemonset openshift-multus/multus has 10% daemonset openshift-multus/multus-additional-cni-plugins has 10% daemonset openshift-multus/network-metrics-daemon has 33% daemonset openshift-network-diagnostics/network-check-target has 10% expected daemonset openshift-kuryr/kuryr-cni to have maxUnavailable 10% or 33% (see comment) instead of 1 Oct 13 10:35:48.384: FAIL: Daemonsets found that do not meet platform requirements for update strategy: expected daemonset openshift-kuryr/kuryr-cni to have maxUnavailable 10% or 33% (see comment) instead of 1 Full Stack Trace github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000001a0) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113 +0xba github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc002b34e68) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:64 +0x125 github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x7f54a917bfff) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/it_node.go:26 +0x7b github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001f56a50, 0xc002b35230, {0x83433a0, 0xc00038a900}) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:215 +0x2a9 github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001f56a50, {0x83433a0, 0xc00038a900}) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/spec/spec.go:138 +0xe7 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0017ecc80, 0xc001f56a50) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:200 +0xe5 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0017ecc80) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:170 +0x1a5 github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0017ecc80) github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/specrunner/spec_runner.go:66 +0xc5 github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000376780, {0x8343660, 0xc000deb270}, {0x0, 0x0}, {0xc000c6e360, 0x1, 0x1}, {0x843fe58, 0xc00038a900}, ...) 
github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/suite/suite.go:62 +0x4b2 github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc00169c360, {0xc000dfb9b0, 0xb8fc7b0, 0x457d780}) github.com/openshift/origin/pkg/test/ginkgo/cmd_runtest.go:61 +0x3be main.newRunTestCommand.func1.1() github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x32 github.com/openshift/origin/test/extended/util.WithCleanup(0xc001ebfc18) github.com/openshift/origin/test/extended/util/test.go:168 +0xad main.newRunTestCommand.func1(0xc001df1b80, {0xc000dfb9b0, 0x1, 0x1}) github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:426 +0x38a github.com/spf13/cobra.(*Command).execute(0xc001df1b80, {0xc000dfb980, 0x1, 0x1}) github.com/spf13/cobra@v1.1.3/command.go:852 +0x60e github.com/spf13/cobra.(*Command).ExecuteC(0xc001df1180) github.com/spf13/cobra@v1.1.3/command.go:960 +0x3ad github.com/spf13/cobra.(*Command).Execute(...) github.com/spf13/cobra@v1.1.3/command.go:897 main.main.func1(0xc00196f200) github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:84 +0x8a main.main() github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:85 +0x3b6 [AfterEach] [sig-arch] Managed cluster github.com/openshift/origin/test/extended/util/client.go:140 [AfterEach] [sig-arch] Managed cluster github.com/openshift/origin/test/extended/util/client.go:141 fail [github.com/onsi/ginkgo@v4.7.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Oct 13 10:35:48.384: Daemonsets found that do not meet platform requirements for update strategy: expected daemonset openshift-kuryr/kuryr-cni to have maxUnavailable 10% or 33% (see comment) instead of 1
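Another real failure: platform daemonsets are expected to roll out with a percentage-based maxUnavailable of 10% or 33% so an update only drains a bounded fraction of nodes at once, while openshift-kuryr/kuryr-cni still carries the API default of 1. A minimal client-go sketch of reading that field (kubeconfig handling as in the previous sketches):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ds, err := client.AppsV1().DaemonSets("openshift-kuryr").Get(context.TODO(), "kuryr-cni", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ru := ds.Spec.UpdateStrategy.RollingUpdate
	if ru == nil || ru.MaxUnavailable == nil {
		fmt.Println("maxUnavailable defaults to 1")
		return
	}
	mu := ru.MaxUnavailable
	// A percentage is stored as an IntOrString of type String, e.g. "10%".
	if mu.Type == intstr.String && (mu.StrVal == "10%" || mu.StrVal == "33%") {
		fmt.Println("update strategy meets the platform requirement:", mu.StrVal)
	} else {
		fmt.Printf("expected maxUnavailable 10%% or 33%% instead of %s\n", mu.String())
	}
}
```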
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:468]: Unexpected error: <errors.aggregate | len:1, cap:1>: [ { s: "promQL query returned unexpected results:\ncontainer_cpu_usage_seconds_total{id!~\"/kubepods.slice/.*\"} >= 1\n[]", }, ] promQL query returned unexpected results: container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1 [] occurred
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/prometheus/prometheus.go:250 [It] should have non-Pod host cAdvisor metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/prometheus/prometheus.go:457 Oct 13 10:35:48.785: INFO: Creating namespace "e2e-test-prometheus-jskqg" Oct 13 10:35:49.101: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:35:49.226: INFO: Creating new exec pod STEP: perform prometheus metric query container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1 Oct 13 10:38:13.413: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-jskqg exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer <service-account token omitted>' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=container_cpu_usage_seconds_total%7Bid%21~%22%2Fkubepods.slice%2F.%2A%22%7D+%3E%3D+1"' Oct 13 10:38:13.754: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" The same query was repeated at 10:38:24, 10:38:34, 10:38:44, and 10:38:55; every attempt returned the same empty result vector. [AfterEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:140 STEP: Collecting events from namespace "e2e-test-prometheus-jskqg". STEP: Found 5 events.
Oct 13 10:39:05.622: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-jskqg/execpod to ostest-n5rnf-worker-0-j4pkp Oct 13 10:39:05.622: INFO: At 2022-10-13 10:38:11 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.222.102/23] from kuryr Oct 13 10:39:05.622: INFO: At 2022-10-13 10:38:11 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine Oct 13 10:39:05.622: INFO: At 2022-10-13 10:38:11 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container Oct 13 10:39:05.622: INFO: At 2022-10-13 10:38:11 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container Oct 13 10:39:05.633: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 10:39:05.633: INFO: execpod ostest-n5rnf-worker-0-j4pkp Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:35:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:38:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:38:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:35:49 +0000 UTC }] Oct 13 10:39:05.633: INFO: Oct 13 10:39:05.653: INFO: skipping dumping cluster info - cluster too large [AfterEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-prometheus-jskqg" for this suite. fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:468]: Unexpected error: <errors.aggregate | len:1, cap:1>: [ { s: "promQL query returned unexpected results:\ncontainer_cpu_usage_seconds_total{id!~\"/kubepods.slice/.*\"} >= 1\n[]", }, ] promQL query returned unexpected results: container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1 [] occurred
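The failed assertion expects the query to return at least one series: cAdvisor should be exporting CPU usage for host-level cgroups whose id does not match /kubepods.slice/.*, so an empty vector means the non-pod host metrics are missing. A minimal sketch of issuing the same query directly against a Prometheus-compatible endpoint (the token handling is a placeholder, and certificate verification is skipped just as the test's curl -k does):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
)

func main() {
	query := `container_cpu_usage_seconds_total{id!~"/kubepods.slice/.*"} >= 1`
	// In-cluster endpoint used by the test; reachable only from inside the cluster.
	endpoint := "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query"
	token := os.Getenv("BEARER_TOKEN") // placeholder for a service-account token

	req, err := http.NewRequest("GET", endpoint+"?query="+url.QueryEscape(query), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	// Equivalent of curl -k: accept the serving certificate unverified.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// An empty "result":[] here is exactly what made the test fail.
	fmt.Println(string(body))
}
```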
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:35:40.357: INFO: configPath is now "/tmp/configfile436598500" Oct 13 10:35:40.357: INFO: The user is now "e2e-test-multicast-ntz2z-user" Oct 13 10:35:40.357: INFO: Creating project "e2e-test-multicast-ntz2z" Oct 13 10:35:40.770: INFO: Waiting on permissions in project "e2e-test-multicast-ntz2z" ... Oct 13 10:35:40.781: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:35:40.916: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:35:41.057: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:35:41.169: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:35:41.183: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:35:41.260: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:35:41.921: INFO: Project "e2e-test-multicast-ntz2z" has been fully provisioned. [BeforeEach] when using one of the OpenshiftSDN modes 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' github.com/openshift/origin/test/extended/networking/util.go:375 Oct 13 10:35:42.342: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used Oct 13 10:35:42.342: INFO: Not using one of the specified OpenshiftSDN modes [AfterEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:35:42.397: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-multicast-ntz2z-user}, err: <nil> Oct 13 10:35:42.453: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-multicast-ntz2z}, err: <nil> Oct 13 10:35:42.492: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~DAnp4EtEmByqJLDhdDKQObzkv1UNUw-4pIaFTmE5_eI}, err: <nil> [AfterEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-multicast-ntz2z" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:34:37.471: INFO: configPath is now "/tmp/configfile3214545341" Oct 13 10:34:37.471: INFO: The user is now "e2e-test-ns-global-jv2w9-user" Oct 13 10:34:37.471: INFO: Creating project "e2e-test-ns-global-jv2w9" Oct 13 10:34:37.726: INFO: Waiting on permissions in project "e2e-test-ns-global-jv2w9" ... Oct 13 10:34:37.739: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:34:37.867: INFO: Waiting for service account "default" secrets () to include dockercfg/token ... Oct 13 10:34:37.947: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:34:38.069: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:34:38.180: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:34:38.191: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:34:38.208: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:34:38.841: INFO: Project "e2e-test-ns-global-jv2w9" has been fully provisioned. [BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default github.com/openshift/origin/test/extended/networking/util.go:350 Oct 13 10:34:39.160: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used Oct 13 10:34:39.160: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:34:39.181: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-ns-global-jv2w9-user}, err: <nil> Oct 13 10:34:39.204: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-ns-global-jv2w9}, err: <nil> Oct 13 10:34:39.229: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~N69OC9mjdQku8C-lluKw1YrhLt77fuH74XMh9zgdtb0}, err: <nil> [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-ns-global-jv2w9" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:33:48.947: INFO: configPath is now "/tmp/configfile3701117800" Oct 13 10:33:48.947: INFO: The user is now "e2e-test-multicast-hsldw-user" Oct 13 10:33:48.947: INFO: Creating project "e2e-test-multicast-hsldw" Oct 13 10:33:49.207: INFO: Waiting on permissions in project "e2e-test-multicast-hsldw" ... Oct 13 10:33:49.225: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:33:49.338: INFO: Waiting for service account "default" secrets (default-dockercfg-z7rtd,default-dockercfg-z7rtd) to include dockercfg/token ... Oct 13 10:33:49.431: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:33:49.548: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:33:49.658: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:33:49.670: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:33:49.682: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:33:50.222: INFO: Project "e2e-test-multicast-hsldw" has been fully provisioned. [BeforeEach] when using one of the OpenshiftSDN modes 'redhat/openshift-ovs-subnet' github.com/openshift/origin/test/extended/networking/util.go:375 Oct 13 10:33:50.511: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used Oct 13 10:33:50.511: INFO: Not using one of the specified OpenshiftSDN modes [AfterEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:33:50.597: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-multicast-hsldw-user}, err: <nil> Oct 13 10:33:50.636: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-multicast-hsldw}, err: <nil> Oct 13 10:33:50.687: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~bt-vq4sHjzx-mxI1hH8wwDVcNmWDv-wEZ8l6p8AdMb4}, err: <nil> [AfterEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-multicast-hsldw" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:32:18.686: INFO: configPath is now "/tmp/configfile3983327930" Oct 13 10:32:18.686: INFO: The user is now "e2e-test-ns-global-d5xlg-user" Oct 13 10:32:18.686: INFO: Creating project "e2e-test-ns-global-d5xlg" Oct 13 10:32:18.918: INFO: Waiting on permissions in project "e2e-test-ns-global-d5xlg" ... Oct 13 10:32:18.928: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:32:19.048: INFO: Waiting for service account "default" to be available: serviceaccounts "default" not found (will retry) ... Oct 13 10:32:19.140: INFO: Waiting for service account "default" secrets () to include dockercfg/token ... Oct 13 10:32:19.260: INFO: Waiting for service account "default" secrets (default-token-dlkvz) to include dockercfg/token ... Oct 13 10:32:19.382: INFO: Waiting for service account "default" secrets (default-token-dlkvz) to include dockercfg/token ... Oct 13 10:32:19.467: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:32:19.580: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:32:19.698: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:32:19.712: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:32:19.745: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:32:20.440: INFO: Project "e2e-test-ns-global-d5xlg" has been fully provisioned. [BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default github.com/openshift/origin/test/extended/networking/util.go:350 Oct 13 10:32:20.913: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used Oct 13 10:32:20.913: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:32:20.945: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-ns-global-d5xlg-user}, err: <nil> Oct 13 10:32:20.971: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-ns-global-d5xlg}, err: <nil> Oct 13 10:32:20.992: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~CqcLbMW5wzS3clKDWTSUEQFc2ycy1HUnZXgjgli3C2k}, err: <nil> [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-ns-global-d5xlg" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network][Feature:Network Policy Audit logging] github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network][Feature:Network Policy Audit logging] github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:32:15.535: INFO: configPath is now "/tmp/configfile2551294290" Oct 13 10:32:15.535: INFO: The user is now "e2e-test-acl-logging-fh7fx-user" Oct 13 10:32:15.535: INFO: Creating project "e2e-test-acl-logging-fh7fx" Oct 13 10:32:15.806: INFO: Waiting on permissions in project "e2e-test-acl-logging-fh7fx" ... Oct 13 10:32:15.819: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:32:15.932: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:32:16.040: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:32:16.149: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:32:16.162: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:32:16.174: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:32:16.783: INFO: Project "e2e-test-acl-logging-fh7fx" has been fully provisioned. [BeforeEach] when using openshift ovn-kubernetes github.com/openshift/origin/test/extended/networking/util.go:410 Oct 13 10:32:16.931: INFO: Not using openshift-sdn [AfterEach] [sig-network][Feature:Network Policy Audit logging] github.com/openshift/origin/test/extended/networking/acl_audit_log.go:32 [AfterEach] [sig-network][Feature:Network Policy Audit logging] github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:32:16.953: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-acl-logging-fh7fx-user}, err: <nil> Oct 13 10:32:16.987: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-acl-logging-fh7fx}, err: <nil> Oct 13 10:32:17.009: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~COJG-WmJ9YYsASzt3fQ9kjLZAY0JstrJV7PSlzxELVc}, err: <nil> [AfterEach] [sig-network][Feature:Network Policy Audit logging] github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-acl-logging-fh7fx" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:412]: Not using openshift-sdn
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:32:11.017: INFO: configPath is now "/tmp/configfile4118421892" Oct 13 10:32:11.017: INFO: The user is now "e2e-test-operators-routable-kbc74-user" Oct 13 10:32:11.017: INFO: Creating project "e2e-test-operators-routable-kbc74" Oct 13 10:32:11.985: INFO: Waiting on permissions in project "e2e-test-operators-routable-kbc74" ... Oct 13 10:32:11.998: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:32:12.124: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:32:12.274: INFO: Waiting for service account "deployer" secrets (deployer-dockercfg-ld9w8,deployer-dockercfg-ld9w8) to include dockercfg/token ... Oct 13 10:32:12.338: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:32:12.446: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:32:12.468: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:32:12.546: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:32:13.232: INFO: Project "e2e-test-operators-routable-kbc74" has been fully provisioned. [BeforeEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/operators/routable.go:34 [AfterEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:32:13.324: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-operators-routable-kbc74-user}, err: <nil> Oct 13 10:32:13.347: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-operators-routable-kbc74}, err: <nil> Oct 13 10:32:13.367: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~U07id3P-s7MxFuISAXEzWg1zBQ0PLfIjILHuf88FNQM}, err: <nil> [AfterEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-operators-routable-kbc74" for this suite. skip [github.com/openshift/origin/test/extended/operators/routable.go:41]: default router is not exposed by a load balancer service
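This check only applies when the default router is published through a LoadBalancer-type Service. One way to confirm that condition manually, assuming the default IngressController's conventional Service name router-default in openshift-ingress:

  oc -n openshift-ingress get svc router-default -o jsonpath='{.spec.type}'

Anything other than LoadBalancer (common on bare-metal and OpenStack clusters that front the router differently) produces exactly this skip.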
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-devex][Feature:Templates] templateservicebroker bind test github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-devex][Feature:Templates] templateservicebroker bind test github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:29:25.828: INFO: configPath is now "/tmp/configfile659538773" Oct 13 10:29:25.828: INFO: The user is now "e2e-test-templates-pv8wh-user" Oct 13 10:29:25.828: INFO: Creating project "e2e-test-templates-pv8wh" Oct 13 10:29:25.969: INFO: Waiting on permissions in project "e2e-test-templates-pv8wh" ... Oct 13 10:29:25.977: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:29:26.096: INFO: Waiting for service account "default" secrets (default-dockercfg-8zc9g,default-dockercfg-8zc9g) to include dockercfg/token ... Oct 13 10:29:26.184: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:29:26.306: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:29:26.429: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:29:26.452: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:29:26.461: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:29:27.030: INFO: Project "e2e-test-templates-pv8wh" has been fully provisioned. [BeforeEach] [sig-devex][Feature:Templates] templateservicebroker bind test github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:41 Oct 13 10:29:27.043: INFO: The template service broker is not installed: services "apiserver" not found [AfterEach] github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:92 [AfterEach] [sig-devex][Feature:Templates] templateservicebroker bind test github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:29:27.061: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-templates-pv8wh-user}, err: <nil> Oct 13 10:29:27.075: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-templates-pv8wh}, err: <nil> Oct 13 10:29:27.100: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~LJHb-wRH9vbVDXKmwL_QfXDFvr49Fs0yVVzQqHD4QPY}, err: <nil> [AfterEach] [sig-devex][Feature:Templates] templateservicebroker bind test github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-templates-pv8wh" for this suite. skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_bind.go:45]: The template service broker is not installed: services "apiserver" not found
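The bind test probes for the template service broker's "apiserver" Service and skips when it is missing; the broker is an optional component and is not installed by default on current OpenShift releases. A direct probe, assuming the broker's conventional namespace:

  oc -n openshift-template-service-broker get svc apiserver

A NotFound error here corresponds to the skip reason above.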
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-installer][Feature:baremetal] Baremetal platform should github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-installer][Feature:baremetal] Baremetal platform should github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:28:32.841: INFO: configPath is now "/tmp/configfile3661483143" Oct 13 10:28:32.841: INFO: The user is now "e2e-test-baremetal-j8qzb-user" Oct 13 10:28:32.841: INFO: Creating project "e2e-test-baremetal-j8qzb" Oct 13 10:28:33.115: INFO: Waiting on permissions in project "e2e-test-baremetal-j8qzb" ... Oct 13 10:28:33.123: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:28:33.233: INFO: Waiting for service account "default" secrets (default-token-82rj9) to include dockercfg/token ... Oct 13 10:28:33.341: INFO: Waiting for service account "default" secrets (default-token-82rj9) to include dockercfg/token ... Oct 13 10:28:33.433: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:28:33.540: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:28:33.651: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:28:33.670: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:28:33.687: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:28:34.223: INFO: Project "e2e-test-baremetal-j8qzb" has been fully provisioned. [It] have a metal3 deployment [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/baremetal/hosts.go:66 STEP: checking platform type Oct 13 10:28:34.236: INFO: No baremetal platform detected [AfterEach] [sig-installer][Feature:baremetal] Baremetal platform should github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:28:34.275: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-baremetal-j8qzb-user}, err: <nil> Oct 13 10:28:34.322: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-baremetal-j8qzb}, err: <nil> Oct 13 10:28:34.339: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~zTJbZV24Emz5eddzpB-651pg0xODXGk1w60L1jJnH2g}, err: <nil> [AfterEach] [sig-installer][Feature:baremetal] Baremetal platform should github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-baremetal-j8qzb" for this suite. skip [github.com/openshift/origin/test/extended/baremetal/hosts.go:29]: No baremetal platform detected
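The metal3 check keys off the platform type recorded in the cluster Infrastructure config, and anything other than BareMetal triggers the skip. To read it directly:

  oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}'

Here the cluster appears to be running on OpenStack (note the api.ostest.shiftstack.com API endpoint elsewhere in this log), so the baremetal suite can never match.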
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:454]: Unexpected error: <errors.aggregate | len:6, cap:8>, six promQL queries returned unexpected (empty) results:
promQL query returned unexpected results: sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0 []
promQL query returned unexpected results: sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0 []
promQL query returned unexpected results: cluster_infrastructure_provider{type!=""} []
promQL query returned unexpected results: cluster_feature_set []
promQL query returned unexpected results: cluster_installer{type!="",invoker!=""} []
promQL query returned unexpected results: instance:etcd_object_counts:sum > 0 []
occurred
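Each of the six queries returned an empty instant vector on every attempt, which the test treats as fatal: these installer and telemetry recording-rule metrics are expected to exist on any healthy cluster. The detailed log below shows the mechanics; stripped of boilerplate, the test execs curl against the in-cluster Thanos Querier with a service-account bearer token. A minimal manual reproduction of one query, with names taken from the log (the TOKEN placeholder stands for any token with cluster monitoring read access; the test borrows the prometheus-adapter service account's):

  # run from a pod with cluster network access, e.g. the test's execpod
  TOKEN='<service-account token with monitoring access>'
  kubectl -n e2e-test-prometheus-v6dwx exec execpod -- \
    curl -s -k -H "Authorization: Bearer $TOKEN" \
    'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'

An empty "result":[] in the response, as seen repeatedly below, means the queried series simply do not exist in the monitoring stack at that moment.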
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/prometheus/prometheus.go:250 [It] should have important platform topology metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/prometheus/prometheus.go:430 Oct 13 10:27:46.455: INFO: configPath is now "/tmp/configfile3845938436" Oct 13 10:27:46.455: INFO: The user is now "e2e-test-prometheus-v6dwx-user" Oct 13 10:27:46.455: INFO: Creating project "e2e-test-prometheus-v6dwx" Oct 13 10:27:46.581: INFO: Waiting on permissions in project "e2e-test-prometheus-v6dwx" ... Oct 13 10:27:46.599: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:27:46.720: INFO: Waiting for service account "default" secrets (default-token-ptvlj) to include dockercfg/token ... Oct 13 10:27:46.818: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:27:46.939: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:27:47.086: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:27:47.127: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:27:47.189: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:27:47.917: INFO: Project "e2e-test-prometheus-v6dwx" has been fully provisioned. Oct 13 10:27:47.920: INFO: Creating new exec pod
[log condensed: every query below ran through the same command, shown once here; the service-account bearer token and the sh -x stderr echo of each invocation are identical throughout and elided]
/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer <elided>' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=<url-encoded query>"
STEP: perform prometheus metric query cluster_feature_set
Oct 13 10:29:22.484: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Oct 13 10:29:22.942: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Oct 13 10:29:23.355: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:23.739: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:24.072: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Oct 13 10:29:24.431: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Oct 13 10:29:34.843: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_feature_set
Oct 13 10:29:35.233: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Oct 13 10:29:35.631: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Oct 13 10:29:36.043: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:36.494: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:36.914: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Oct 13 10:29:47.403: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_feature_set
Oct 13 10:29:47.870: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Oct 13 10:29:48.235: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Oct 13 10:29:48.738: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:49.181: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Oct 13 10:29:49.812: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n" Oct 13 10:29:49.812: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""} Oct 13 10:29:59.814: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"' Oct 13 10:30:00.264: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n" Oct 13 10:30:00.264: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query cluster_feature_set Oct 13 10:30:00.264: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"' Oct 13 10:30:00.740: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n" Oct 13 10:30:00.740: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""} Oct 13 10:30:00.740: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"' Oct 13 10:30:01.219: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D'\n" Oct 13 10:30:01.219: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0 Oct 13 10:30:01.219: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0"' Oct 13 10:30:01.672: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0'\n" Oct 13 10:30:01.672: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0 Oct 13 10:30:01.672: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"' Oct 13 10:30:02.047: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n" Oct 13 10:30:02.047: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0 Oct 13 10:30:02.047: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"' Oct 13 10:30:02.532: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n" Oct 13 10:30:02.532: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query cluster_feature_set Oct 13 10:30:12.537: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"' Oct 13 10:30:12.890: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n" Oct 13 10:30:12.890: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""} Oct 13 10:30:12.891: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"' Oct 13 10:30:13.264: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D'\n" Oct 13 10:30:13.264: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0 Oct 13 10:30:13.264: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0"' Oct 13 10:30:13.625: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0'\n" Oct 13 10:30:13.625: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0 Oct 13 10:30:13.625: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"' Oct 13 10:30:14.100: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n" Oct 13 10:30:14.100: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0 Oct 13 10:30:14.100: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"' Oct 13 10:30:14.637: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n" Oct 13 10:30:14.637: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""} Oct 13 10:30:14.637: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-v6dwx exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"' Oct 13 10:30:15.074: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n" Oct 13 10:30:15.074: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" [AfterEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:140 STEP: Collecting events from namespace "e2e-test-prometheus-v6dwx". STEP: Found 5 events. Oct 13 10:30:25.134: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-v6dwx/execpod to ostest-n5rnf-worker-0-j4pkp Oct 13 10:30:25.134: INFO: At 2022-10-13 10:29:20 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.163.122/23] from kuryr Oct 13 10:30:25.134: INFO: At 2022-10-13 10:29:20 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine Oct 13 10:30:25.134: INFO: At 2022-10-13 10:29:20 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container Oct 13 10:30:25.134: INFO: At 2022-10-13 10:29:20 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container Oct 13 10:30:25.144: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 10:30:25.144: INFO: execpod ostest-n5rnf-worker-0-j4pkp Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:27:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:29:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:29:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:27:48 +0000 UTC }] Oct 13 10:30:25.144: INFO: Oct 13 10:30:25.161: INFO: skipping dumping cluster info - cluster too large Oct 13 10:30:25.208: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-prometheus-v6dwx-user}, err: <nil> Oct 13 10:30:25.253: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-prometheus-v6dwx}, err: <nil> Oct 13 10:30:25.299: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~WN9dG42ISAA-HmrjSwS2VqZ6Yu9Y-l2mowFfKgGpBsI}, err: <nil> [AfterEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-prometheus-v6dwx" for this suite. 
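For reference, the probe loop above boils down to an authenticated GET against the Thanos querier's /api/v1/query endpoint. The following is a minimal standalone sketch of one such request, not the origin test code; the token source (a PROM_TOKEN environment variable) is an assumption for illustration, and curl's -k is mirrored by skipping TLS verification.

// Minimal sketch: one authenticated PromQL probe against Thanos querier.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

// promResponse models just enough of the Prometheus HTTP API response
// to tell an empty vector apart from one with samples.
type promResponse struct {
	Status string `json:"status"`
	Data   struct {
		ResultType string            `json:"resultType"`
		Result     []json.RawMessage `json:"result"`
	} `json:"data"`
}

func main() {
	token := os.Getenv("PROM_TOKEN") // assumption: bearer token supplied via env
	query := `cluster_infrastructure_provider{type!=""}`

	client := &http.Client{Transport: &http.Transport{
		// Equivalent of curl -k in the log: accept the serving cert unverified.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	endpoint := "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=" +
		url.QueryEscape(query)
	req, err := http.NewRequest(http.MethodGet, endpoint, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var pr promResponse
	if err := json.NewDecoder(resp.Body).Decode(&pr); err != nil {
		panic(err)
	}
	// status "success" with zero samples is exactly the failure mode in the log.
	fmt.Printf("status=%s samples=%d\n", pr.Status, len(pr.Data.Result))
}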
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:454]: Unexpected error: <errors.aggregate | len:6, cap:8> occurred; the six aggregated errors:
promQL query returned unexpected results: sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0 []
promQL query returned unexpected results: sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0 []
promQL query returned unexpected results: cluster_infrastructure_provider{type!=""} []
promQL query returned unexpected results: cluster_feature_set []
promQL query returned unexpected results: cluster_installer{type!="",invoker!=""} []
promQL query returned unexpected results: instance:etcd_object_counts:sum > 0 []
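The <errors.aggregate> dump is the printed form of an aggregate error from k8s.io/apimachinery. A minimal sketch of how per-query failures produce that shape; checkQuery is a hypothetical helper, and only utilerrors.NewAggregate is the real library call.

// Sketch: collect one error per empty-result query, fail with the aggregate.
package main

import (
	"fmt"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

// checkQuery returns an error in the same format the test output prints
// when a PromQL query comes back with no samples.
func checkQuery(query string, samples int) error {
	if samples == 0 {
		return fmt.Errorf("promQL query returned unexpected results:\n%s\n[]", query)
	}
	return nil
}

func main() {
	queries := []string{
		`cluster_infrastructure_provider{type!=""}`,
		`cluster_feature_set`,
		`cluster_installer{type!="",invoker!=""}`,
	}
	var errs []error
	for _, q := range queries {
		if err := checkQuery(q, 0); err != nil { // zero samples, as in the log
			errs = append(errs, err)
		}
	}
	// NewAggregate returns nil for an empty slice; otherwise its Error()
	// joins the individual messages, which is the bracketed list in the dump.
	if agg := utilerrors.NewAggregate(errs); agg != nil {
		fmt.Println(agg)
	}
}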
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:25:17.708: INFO: configPath is now "/tmp/configfile1979282045" Oct 13 10:25:17.708: INFO: The user is now "e2e-test-router-http2-7mvp9-user" Oct 13 10:25:17.708: INFO: Creating project "e2e-test-router-http2-7mvp9" Oct 13 10:25:18.291: INFO: Waiting on permissions in project "e2e-test-router-http2-7mvp9" ... Oct 13 10:25:18.298: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:25:18.407: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:25:18.534: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:25:18.649: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:25:18.663: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:25:18.675: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:25:19.288: INFO: Project "e2e-test-router-http2-7mvp9" has been fully provisioned. [It] should pass the http2 tests [Suite:openshift/conformance/parallel/minimal] github.com/openshift/origin/test/extended/router/http2.go:90 [AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:25:19.404: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-router-http2-7mvp9-user}, err: <nil> Oct 13 10:25:19.582: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-router-http2-7mvp9}, err: <nil> Oct 13 10:25:19.747: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~ItxIv7v9gNLFZ0KzvDPN8pw_KSjYB_6bGNBfbfwZ1MA}, err: <nil> [AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-router-http2-7mvp9" for this suite. [AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/router/http2.go:73 skip [github.com/openshift/origin/test/extended/router/http2.go:100]: Skip on platforms where the default router is not exposed by a load balancer service.
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:25:15.577: INFO: configPath is now "/tmp/configfile1585300186" Oct 13 10:25:15.577: INFO: The user is now "e2e-test-grpc-interop-pfbzs-user" Oct 13 10:25:15.577: INFO: Creating project "e2e-test-grpc-interop-pfbzs" Oct 13 10:25:15.718: INFO: Waiting on permissions in project "e2e-test-grpc-interop-pfbzs" ... Oct 13 10:25:15.730: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:25:15.847: INFO: Waiting for service account "default" secrets () to include dockercfg/token ... Oct 13 10:25:15.942: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:25:16.054: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:25:16.168: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:25:16.176: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:25:16.199: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:25:16.826: INFO: Project "e2e-test-grpc-interop-pfbzs" has been fully provisioned. [It] should pass the gRPC interoperability tests [Suite:openshift/conformance/parallel/minimal] github.com/openshift/origin/test/extended/router/grpc-interop.go:47 [AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:25:16.867: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-grpc-interop-pfbzs-user}, err: <nil> Oct 13 10:25:16.885: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-grpc-interop-pfbzs}, err: <nil> Oct 13 10:25:16.925: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~trd4xNLHo7L1y4mpZAgagm_tFpmqEJe1km9bJ5CpwYI}, err: <nil> [AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-grpc-interop-pfbzs" for this suite. [AfterEach] [sig-network-edge][Conformance][Area:Networking][Feature:Router] github.com/openshift/origin/test/extended/router/grpc-interop.go:36 skip [github.com/openshift/origin/test/extended/router/grpc-interop.go:57]: Skip on platforms where the default router is not exposed by a load balancer service.
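Both router tests above (http2 and gRPC interop) skip for the same reason: on this platform the default router is not exposed by a LoadBalancer Service. A hedged client-go sketch of that gate follows; the namespace and Service name (openshift-ingress/router-default) match stock OpenShift ingress, but the actual helper in origin may resolve the router differently.

// Sketch: skip router e2e tests unless the default router is LoadBalancer-exposed.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	svc, err := client.CoreV1().Services("openshift-ingress").
		Get(context.TODO(), "router-default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
		// Matches the skip reason in both test outputs above.
		fmt.Println("skip: default router is not exposed by a load balancer service")
		return
	}
	fmt.Println("default router is LoadBalancer-exposed; tests would proceed")
}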
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:25:13.260: INFO: configPath is now "/tmp/configfile1397709763" Oct 13 10:25:13.260: INFO: The user is now "e2e-test-multicast-dxfdl-user" Oct 13 10:25:13.260: INFO: Creating project "e2e-test-multicast-dxfdl" Oct 13 10:25:13.465: INFO: Waiting on permissions in project "e2e-test-multicast-dxfdl" ... Oct 13 10:25:13.474: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:25:13.603: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:25:13.712: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:25:13.821: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:25:13.829: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:25:13.844: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:25:14.469: INFO: Project "e2e-test-multicast-dxfdl" has been fully provisioned. [BeforeEach] when using one of the OpenshiftSDN modes 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' github.com/openshift/origin/test/extended/networking/util.go:375 Oct 13 10:25:14.868: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used Oct 13 10:25:14.868: INFO: Not using one of the specified OpenshiftSDN modes [AfterEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:25:14.907: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-multicast-dxfdl-user}, err: <nil> Oct 13 10:25:14.939: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-multicast-dxfdl}, err: <nil> Oct 13 10:25:14.957: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~UcXkSAzTQZ6CeIwXUDnQsRyCzAaY57AVCiyDPHFg6kg}, err: <nil> [AfterEach] [sig-network] multicast github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-multicast-dxfdl" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:384]: Not using one of the specified OpenshiftSDN modes
flake: Workloads with outstanding bugs: Component downloads has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954866 Component ingress-canary has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954892 Component migrator has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954868 Component network-check-source has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870 Component network-check-target has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [It] ensure platform components have system-* priority class associated [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/pods/priorityclasses.go:20 Oct 13 10:24:59.354: INFO: Workloads with outstanding bugs: Component downloads has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954866 Component ingress-canary has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954892 Component migrator has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954868 Component network-check-source has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870 Component network-check-target has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870 Oct 13 10:24:59.354: INFO: Workloads with outstanding bugs: Component downloads has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954866 Component ingress-canary has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954892 Component migrator has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954868 Component network-check-source has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870 Component network-check-target has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870 [AfterEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:140 [AfterEach] [sig-arch] Managed cluster should github.com/openshift/origin/test/extended/util/client.go:141 flake: Workloads with outstanding bugs: Component downloads has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954866 Component ingress-canary has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954892 Component migrator has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954868 Component network-check-source has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870 Component network-check-target has a bug associated already: https://bugzilla.redhat.com/show_bug.cgi?id=1954870
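The flaking check above asserts that platform workloads carry a system-* priority class. An illustrative client-go sketch of that invariant, not the origin implementation (which also consults the bug allow-list shown in the log):

// Sketch: report pods in openshift-* namespaces lacking a system-* priority class.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		if !strings.HasPrefix(pod.Namespace, "openshift-") {
			continue
		}
		if !strings.HasPrefix(pod.Spec.PriorityClassName, "system-") {
			// These are the offenders the test reports (downloads,
			// ingress-canary, migrator, network-check-*).
			fmt.Printf("%s/%s has priority class %q\n",
				pod.Namespace, pod.Name, pod.Spec.PriorityClassName)
		}
	}
}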
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:23:36.893: INFO: configPath is now "/tmp/configfile2438031237" Oct 13 10:23:36.894: INFO: The user is now "e2e-test-ns-global-58rkb-user" Oct 13 10:23:36.894: INFO: Creating project "e2e-test-ns-global-58rkb" Oct 13 10:23:37.057: INFO: Waiting on permissions in project "e2e-test-ns-global-58rkb" ... Oct 13 10:23:37.065: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:23:37.172: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:23:37.280: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:23:37.387: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:23:37.395: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:23:37.401: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:23:37.950: INFO: Project "e2e-test-ns-global-58rkb" has been fully provisioned. [BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default github.com/openshift/origin/test/extended/networking/util.go:350 Oct 13 10:23:38.223: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used Oct 13 10:23:38.223: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:23:38.256: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-ns-global-58rkb-user}, err: <nil> Oct 13 10:23:38.276: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-ns-global-58rkb}, err: <nil> Oct 13 10:23:38.294: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~N8KNr1-U5uwIpC27LyIVUUML1Gs6YTtOEbinbetFT3w}, err: <nil> [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-ns-global-58rkb" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-arch] Cluster topology single node tests k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename single-node W1013 10:23:35.866248 95025 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 13 10:23:35.866: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] Verify that OpenShift components deploy one replica in SingleReplica topology mode [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/single_node/topology.go:134 Oct 13 10:23:35.884: INFO: Test is only relevant for single replica topologies [AfterEach] [sig-arch] Cluster topology single node tests k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 STEP: Destroying namespace "e2e-single-node-5693" for this suite. skip [github.com/openshift/origin/test/extended/single_node/topology.go:138]: Test is only relevant for single replica topologies
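Note: this skip is the expected outcome on a multi-node cluster; the test only asserts one-replica deployments when the cluster reports SingleReplica topology. One way to check what topology a cluster reports, assuming oc access:

    # HighlyAvailable on a normal multi-node cluster, SingleReplica on single-node
    oc get infrastructure cluster -o jsonpath='{.status.controlPlaneTopology}{" "}{.status.infrastructureTopology}{"\n"}'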
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:23:33.508: INFO: configPath is now "/tmp/configfile1661420397" Oct 13 10:23:33.508: INFO: The user is now "e2e-test-ns-global-49cmq-user" Oct 13 10:23:33.508: INFO: Creating project "e2e-test-ns-global-49cmq" Oct 13 10:23:33.799: INFO: Waiting on permissions in project "e2e-test-ns-global-49cmq" ... Oct 13 10:23:33.812: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:23:33.924: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:23:34.036: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:23:34.144: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:23:34.157: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:23:34.167: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:23:34.903: INFO: Project "e2e-test-ns-global-49cmq" has been fully provisioned. [BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default github.com/openshift/origin/test/extended/networking/util.go:350 Oct 13 10:23:35.192: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used Oct 13 10:23:35.192: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:23:35.239: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-ns-global-49cmq-user}, err: <nil> Oct 13 10:23:35.282: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-ns-global-49cmq}, err: <nil> Oct 13 10:23:35.306: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~FfkaJk3AfoZzzQdqwaieRzbbtVXXZL00CX0BexLKXeg}, err: <nil> [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-ns-global-49cmq" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
fail [github.com/openshift/origin/test/extended/deployments/deployments.go:561]: Unexpected error:
    <*errors.errorString | 0xc00216a8f0>: {
        s: "deployment e2e-test-cli-deployment-dcz78/example-1 failed",
    }
    deployment e2e-test-cli-deployment-dcz78/example-1 failed
occurred
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:23:31.733: INFO: configPath is now "/tmp/configfile4161650962" Oct 13 10:23:31.733: INFO: The user is now "e2e-test-cli-deployment-dcz78-user" Oct 13 10:23:31.733: INFO: Creating project "e2e-test-cli-deployment-dcz78" Oct 13 10:23:32.006: INFO: Waiting on permissions in project "e2e-test-cli-deployment-dcz78" ... Oct 13 10:23:32.018: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:23:32.127: INFO: Waiting for service account "default" secrets (default-token-vlst4) to include dockercfg/token ... Oct 13 10:23:32.233: INFO: Waiting for service account "default" secrets (default-token-vlst4) to include dockercfg/token ... Oct 13 10:23:32.333: INFO: Waiting for service account "default" secrets (default-token-vlst4) to include dockercfg/token ... Oct 13 10:23:32.437: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:23:32.548: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:23:32.654: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:23:32.662: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:23:32.703: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:23:33.514: INFO: Project "e2e-test-cli-deployment-dcz78" has been fully provisioned. 
[BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [JustBeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/deployments/deployments.go:52 [It] should run a successful deployment with a trigger used by different containers [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/deployments/deployments.go:555 STEP: verifying the deployment is marked complete [AfterEach] with multiple image change triggers github.com/openshift/origin/test/extended/deployments/deployments.go:542 Oct 13 10:25:05.557: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 get dc/example -o yaml' Oct 13 10:25:05.672: INFO: apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: creationTimestamp: "2022-10-13T10:23:33Z" generation: 2 labels: app: example name: example namespace: e2e-test-cli-deployment-dcz78 resourceVersion: "955389" uid: 686b7d28-7a36-497c-8565-b485e4ac0c07 spec: replicas: 1 revisionHistoryLimit: 10 selector: app: example strategy: activeDeadlineSeconds: 21600 resources: {} rollingParams: intervalSeconds: 1 maxSurge: 25% maxUnavailable: 25% 
timeoutSeconds: 600 updatePeriodSeconds: 1 type: Rolling template: metadata: creationTimestamp: null labels: app: example spec: containers: - command: - /bin/sleep - "100" image: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985 imagePullPolicy: IfNotPresent name: ruby ports: - containerPort: 8080 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File - command: - /bin/sleep - "100" image: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985 imagePullPolicy: IfNotPresent name: ruby2 ports: - containerPort: 8081 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 test: false triggers: - type: ConfigChange - imageChangeParams: automatic: true containerNames: - ruby - ruby2 from: kind: ImageStreamTag name: ruby:latest namespace: openshift lastTriggeredImage: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985 type: ImageChange status: availableReplicas: 0 conditions: - lastTransitionTime: "2022-10-13T10:23:33Z" lastUpdateTime: "2022-10-13T10:23:33Z" message: Deployment config does not have minimum availability. status: "False" type: Available - lastTransitionTime: "2022-10-13T10:25:05Z" lastUpdateTime: "2022-10-13T10:25:05Z" message: replication controller "example-1" has failed progressing reason: ProgressDeadlineExceeded status: "False" type: Progressing details: causes: - type: ConfigChange message: config change latestVersion: 1 observedGeneration: 2 replicas: 0 unavailableReplicas: 0 updatedReplicas: 0 Oct 13 10:25:05.715: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 get rc/example-1 -o yaml' Oct 13 10:25:05.878: INFO: apiVersion: v1 kind: ReplicationController metadata: annotations: kubectl.kubernetes.io/desired-replicas: "1" openshift.io/deployer-pod.completed-at: 2022-10-13 10:25:02 +0000 UTC openshift.io/deployer-pod.created-at: 2022-10-13 10:23:34 +0000 UTC openshift.io/deployer-pod.name: example-1-deploy openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: example openshift.io/deployment.phase: Failed openshift.io/deployment.replicas: "0" openshift.io/deployment.status-reason: config change openshift.io/encoded-deployment-config: | 
{"kind":"DeploymentConfig","apiVersion":"apps.openshift.io/v1","metadata":{"name":"example","namespace":"e2e-test-cli-deployment-dcz78","uid":"686b7d28-7a36-497c-8565-b485e4ac0c07","resourceVersion":"952925","generation":2,"creationTimestamp":"2022-10-13T10:23:33Z","labels":{"app":"example"},"managedFields":[{"manager":"openshift-tests","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:23:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{}},"f:strategy":{"f:activeDeadlineSeconds":{},"f:rollingParams":{".":{},"f:intervalSeconds":{},"f:maxSurge":{},"f:maxUnavailable":{},"f:timeoutSeconds":{},"f:updatePeriodSeconds":{}},"f:type":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"ruby\"}":{".":{},"f:command":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8080,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"ruby2\"}":{".":{},"f:command":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8081,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}},{"manager":"openshift-controller-manager","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"ruby\"}":{"f:image":{}},"k:{\"name\":\"ruby2\"}":{"f:image":{}}}}},"f:triggers":{}}}},{"manager":"openshift-controller-manager","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:23:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:status":{},"f:type":{}}},"f:details":{".":{},"f:causes":{},"f:message":{}},"f:latestVersion":{},"f:observedGeneration":{}}},"subresource":"status"}]},"spec":{"strategy":{"type":"Rolling","rollingParams":{"updatePeriodSeconds":1,"intervalSeconds":1,"timeoutSeconds":600,"maxUnavailable":"25%","maxSurge":"25%"},"resources":{},"activeDeadlineSeconds":21600},"triggers":[{"type":"ConfigChange"},{"type":"ImageChange","imageChangeParams":{"automatic":true,"containerNames":["ruby","ruby2"],"from":{"kind":"ImageStreamTag","namespace":"openshift","name":"ruby:latest"},"lastTriggeredImage":"image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985"}}],"replicas":1,"revisionHistoryLimit":10,"test":false,"selector":{"app":"example"},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"example"}},"spec":{"containers":[{"name":"ruby","image":"image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985","command":["/bin/sleep","100"],"ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"ruby2","image":"image-registry.openshift-image-registry.svc:5000/open
shift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985","command":["/bin/sleep","100"],"ports":[{"containerPort":8081,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}}},"status":{"latestVersion":1,"observedGeneration":1,"replicas":0,"updatedReplicas":0,"availableReplicas":0,"unavailableReplicas":0,"details":{"message":"config change","causes":[{"type":"ConfigChange"}]},"conditions":[{"type":"Available","status":"False","lastUpdateTime":"2022-10-13T10:23:33Z","lastTransitionTime":"2022-10-13T10:23:33Z","message":"Deployment config does not have minimum availability."}]}} creationTimestamp: "2022-10-13T10:23:34Z" generation: 1 labels: app: example openshift.io/deployment-config.name: example name: example-1 namespace: e2e-test-cli-deployment-dcz78 ownerReferences: - apiVersion: apps.openshift.io/v1 blockOwnerDeletion: true controller: true kind: DeploymentConfig name: example uid: 686b7d28-7a36-497c-8565-b485e4ac0c07 resourceVersion: "955387" uid: a63b3f6d-82a3-4f94-a1b2-99e358591507 spec: replicas: 0 selector: app: example deployment: example-1 deploymentconfig: example template: metadata: annotations: openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: example openshift.io/deployment.name: example-1 creationTimestamp: null labels: app: example deployment: example-1 deploymentconfig: example spec: containers: - command: - /bin/sleep - "100" image: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985 imagePullPolicy: IfNotPresent name: ruby ports: - containerPort: 8080 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File - command: - /bin/sleep - "100" image: image-registry.openshift-image-registry.svc:5000/openshift/ruby@sha256:5795fb5f2564d08afd6a02f416cbdb9d558a555b00d8e229959518b88469b985 imagePullPolicy: IfNotPresent name: ruby2 ports: - containerPort: 8081 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: observedGeneration: 1 replicas: 0 Oct 13 10:25:05.878: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 get pod/example-1-deploy -o yaml' Oct 13 10:25:06.054: INFO: apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.198.168" ], "mac": "fa:16:3e:71:ea:6b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.198.168" ], "mac": "fa:16:3e:71:ea:6b", "default": true, "dns": {} }] openshift.io/deployment-config.name: example openshift.io/deployment.name: example-1 openshift.io/scc: restricted creationTimestamp: "2022-10-13T10:23:34Z" finalizers: - kuryr.openstack.org/pod-finalizer labels: openshift.io/deployer-pod-for.name: example-1 name: example-1-deploy namespace: e2e-test-cli-deployment-dcz78 ownerReferences: - apiVersion: v1 kind: ReplicationController name: example-1 uid: a63b3f6d-82a3-4f94-a1b2-99e358591507 resourceVersion: "955384" uid: 
1ba3bac1-d508-4b6f-9961-4685ab6e9ef4 spec: activeDeadlineSeconds: 21600 containers: - env: - name: OPENSHIFT_DEPLOYMENT_NAME value: example-1 - name: OPENSHIFT_DEPLOYMENT_NAMESPACE value: e2e-test-cli-deployment-dcz78 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902 imagePullPolicy: IfNotPresent name: deployment resources: {} securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1012610000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-7x56n readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: deployer-dockercfg-zdckm nodeName: ostest-n5rnf-worker-0-94fxs preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: fsGroup: 1012610000 seLinuxOptions: level: s0:c112,c89 serviceAccount: deployer serviceAccountName: deployer shareProcessNamespace: false terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: kube-api-access-7x56n projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-13T10:23:34Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-13T10:25:03Z" message: 'containers with unready status: [deployment]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-13T10:25:03Z" message: 'containers with unready status: [deployment]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-13T10:23:34Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://72e560b2832c1ecfae8fe8d621b0b541a9d4634cd4b300977f86b0f9c09102b6 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902 lastState: {} name: deployment ready: false restartCount: 0 started: false state: terminated: containerID: cri-o://72e560b2832c1ecfae8fe8d621b0b541a9d4634cd4b300977f86b0f9c09102b6 exitCode: 1 finishedAt: "2022-10-13T10:25:02Z" reason: Error startedAt: "2022-10-13T10:24:32Z" hostIP: 10.196.2.169 phase: Failed podIP: 10.128.198.168 podIPs: - ip: 10.128.198.168 qosClass: BestEffort startTime: "2022-10-13T10:23:34Z" Oct 13 10:25:06.054: INFO: Running 'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 logs pod/example-1-deploy --timestamps=true' Oct 13 10:25:06.292: INFO: --- pod example-1-deploy logs 2022-10-13T10:25:02.394050992Z error: couldn't get deployment example-1: Get "https://172.30.0.1:443/api/v1/namespaces/e2e-test-cli-deployment-dcz78/replicationcontrollers/example-1": dial tcp 172.30.0.1:443: i/o timeout--- Oct 13 10:25:06.292: INFO: Running 
'oc --namespace=e2e-test-cli-deployment-dcz78 --kubeconfig=/tmp/configfile4161650962 get istag -o wide' Oct 13 10:25:06.442: INFO: No resources found in e2e-test-cli-deployment-dcz78 namespace. [AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/deployments/deployments.go:71 [AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/client.go:140 STEP: Collecting events from namespace "e2e-test-cli-deployment-dcz78". STEP: Found 6 events. Oct 13 10:25:08.459: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for example-1-deploy: { } Scheduled: Successfully assigned e2e-test-cli-deployment-dcz78/example-1-deploy to ostest-n5rnf-worker-0-94fxs Oct 13 10:25:08.459: INFO: At 2022-10-13 10:23:34 +0000 UTC - event for example: {deploymentconfig-controller } DeploymentCreated: Created new replication controller "example-1" for version 1 Oct 13 10:25:08.459: INFO: At 2022-10-13 10:24:26 +0000 UTC - event for example-1-deploy: {multus } AddedInterface: Add eth0 [10.128.198.168/23] from kuryr Oct 13 10:25:08.459: INFO: At 2022-10-13 10:24:26 +0000 UTC - event for example-1-deploy: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902" already present on machine Oct 13 10:25:08.459: INFO: At 2022-10-13 10:24:32 +0000 UTC - event for example-1-deploy: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container deployment Oct 13 10:25:08.459: INFO: At 2022-10-13 10:24:32 +0000 UTC - event for example-1-deploy: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container deployment Oct 13 10:25:08.466: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 10:25:08.466: INFO: example-1-deploy ostest-n5rnf-worker-0-94fxs Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:25:03 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:25:03 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:34 +0000 UTC }] Oct 13 10:25:08.466: INFO: Oct 13 10:25:08.473: INFO: skipping dumping cluster info - cluster too large Oct 13 10:25:08.511: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-cli-deployment-dcz78-user}, err: <nil> Oct 13 10:25:08.541: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-cli-deployment-dcz78}, err: <nil> Oct 13 10:25:08.579: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~G6LEfjXpdPKWJarx_XHGLmpW3SVdyff5o2QDe-E5SLk}, err: <nil> [AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-cli-deployment-dcz78" for this suite. fail [github.com/openshift/origin/test/extended/deployments/deployments.go:561]: Unexpected error: <*errors.errorString | 0xc00216a8f0>: { s: "deployment e2e-test-cli-deployment-dcz78/example-1 failed", } deployment e2e-test-cli-deployment-dcz78/example-1 failed occurred
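Note: the root cause recorded above is in the deployer pod's single log line: it could not reach 172.30.0.1:443, the kubernetes.default service VIP, so it never read back its ReplicationController and the rollout expired with ProgressDeadlineExceeded. A minimal way to re-test that path from a pod in a comparable namespace (a sketch; the image choice is an assumption, any image that provides curl will do):

    # probe the API service VIP the deployer pod failed to dial
    oc run api-probe --restart=Never --command \
      --image=image-registry.openshift-image-registry.svc:5000/openshift/tools:latest \
      -- curl -sk --max-time 10 https://172.30.0.1:443/version
    # after the pod completes, inspect the result; even a 403 body proves the VIP is
    # reachable, whereas the failure above was a plain TCP i/o timeout
    oc logs pod/api-probe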
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:23:29.821: INFO: configPath is now "/tmp/configfile3311815544" Oct 13 10:23:29.822: INFO: The user is now "e2e-test-templates-4kzs9-user" Oct 13 10:23:29.822: INFO: Creating project "e2e-test-templates-4kzs9" Oct 13 10:23:29.997: INFO: Waiting on permissions in project "e2e-test-templates-4kzs9" ... Oct 13 10:23:30.004: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:23:30.115: INFO: Waiting for service account "default" secrets (default-dockercfg-7r27l,default-dockercfg-7r27l) to include dockercfg/token ... Oct 13 10:23:30.223: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:23:30.330: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:23:30.437: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:23:30.444: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:23:30.458: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:23:31.103: INFO: Project "e2e-test-templates-4kzs9" has been fully provisioned. [JustBeforeEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:53 Oct 13 10:23:31.112: INFO: The template service broker is not installed: services "apiserver" not found [AfterEach] github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:346 [AfterEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:23:31.137: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-templates-4kzs9-user}, err: <nil> Oct 13 10:23:31.169: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-templates-4kzs9}, err: <nil> Oct 13 10:23:31.185: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~W77J_cG9mxOJuMh3UnMTgu1c--CMvlwpJ6nPc-uZHKU}, err: <nil> [AfterEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-templates-4kzs9" for this suite. [AfterEach] [sig-devex][Feature:Templates] templateservicebroker end-to-end test github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:99 skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:57]: The template service broker is not installed: services "apiserver" not found
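Note: this skip is the normal outcome on clusters without the long-deprecated template service broker installed; the JustBeforeEach probes for the broker's apiserver service and bails out when it is absent. A direct check, assuming the broker's default namespace (an assumption; the namespace only exists when the broker operator has been installed):

    # a NotFound error here reproduces the skip condition from the log above
    oc -n openshift-template-service-broker get service apiserver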
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network][Feature:Router] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-network][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:23:21.014: INFO: configPath is now "/tmp/configfile4260852904" Oct 13 10:23:21.015: INFO: The user is now "e2e-test-router-headers-v7kl9-user" Oct 13 10:23:21.015: INFO: Creating project "e2e-test-router-headers-v7kl9" Oct 13 10:23:21.242: INFO: Waiting on permissions in project "e2e-test-router-headers-v7kl9" ... Oct 13 10:23:21.247: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:23:21.361: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:23:21.475: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:23:21.597: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:23:21.605: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:23:21.611: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:23:22.138: INFO: Project "e2e-test-router-headers-v7kl9" has been fully provisioned. [BeforeEach] [sig-network][Feature:Router] github.com/openshift/origin/test/extended/router/headers.go:35 [It] should set Forwarded headers appropriately [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/router/headers.go:48 [AfterEach] [sig-network][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:23:22.341: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-router-headers-v7kl9-user}, err: <nil> Oct 13 10:23:22.355: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-router-headers-v7kl9}, err: <nil> Oct 13 10:23:22.368: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~TqzvAuBmhFUClSuQ6ze87URcnOQdOraoAS9Zksmlf0A}, err: <nil> [AfterEach] [sig-network][Feature:Router] github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-router-headers-v7kl9" for this suite. skip [github.com/openshift/origin/test/extended/router/headers.go:60]: BZ 1772125 -- not verified on platform type "OpenStack"
fail [github.com/openshift/origin/test/extended/deployments/deployments.go:481]: Unexpected error:
    <*errors.errorString | 0xc001d76e70>: {
        s: "deployment e2e-test-cli-deployment-rxkqx/tag-images-1 failed",
    }
    deployment e2e-test-cli-deployment-rxkqx/tag-images-1 failed
occurred
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:22:55.311: INFO: configPath is now "/tmp/configfile3297171011" Oct 13 10:22:55.311: INFO: The user is now "e2e-test-cli-deployment-rxkqx-user" Oct 13 10:22:55.311: INFO: Creating project "e2e-test-cli-deployment-rxkqx" Oct 13 10:22:55.423: INFO: Waiting on permissions in project "e2e-test-cli-deployment-rxkqx" ... Oct 13 10:22:55.433: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:22:55.562: INFO: Waiting for service account "default" secrets (default-token-98p2s) to include dockercfg/token ... Oct 13 10:22:55.650: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:22:55.770: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:22:55.892: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:22:55.904: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:22:55.918: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:22:56.648: INFO: Project "e2e-test-cli-deployment-rxkqx" has been fully provisioned. [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] 
deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/framework.go:1453 [JustBeforeEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/deployments/deployments.go:52 [It] should successfully tag the deployed image [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/deployments/deployments.go:474 STEP: creating the deployment config fixture STEP: verifying the deployment is marked complete [AfterEach] when tagging images github.com/openshift/origin/test/extended/deployments/deployments.go:470 Oct 13 10:23:54.674: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 get dc/tag-images -o yaml' Oct 13 10:23:54.866: INFO: apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: creationTimestamp: "2022-10-13T10:22:56Z" generation: 1 name: tag-images namespace: e2e-test-cli-deployment-rxkqx resourceVersion: "953528" uid: dc3129c1-ed5d-4a82-9435-9ae94f3c1de3 spec: replicas: 1 revisionHistoryLimit: 10 selector: name: tag-images strategy: activeDeadlineSeconds: 21600 recreateParams: post: failurePolicy: Abort tagImages: - containerName: sample-name to: kind: ImageStreamTag name: sample-stream:deployed timeoutSeconds: 600 resources: {} type: Recreate template: metadata: creationTimestamp: null labels: name: tag-images spec: containers: - command: - /bin/sh - -c - sleep 300 image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest imagePullPolicy: IfNotPresent name: sample-name ports: - containerPort: 8080 protocol: TCP resources: limits: cpu: 100m memory: 3Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 1 test: true triggers: - type: ConfigChange status: availableReplicas: 0 conditions: - lastTransitionTime: "2022-10-13T10:22:56Z" lastUpdateTime: "2022-10-13T10:22:56Z" message: Deployment config does not have minimum availability. 
status: "False" type: Available - lastTransitionTime: "2022-10-13T10:23:54Z" lastUpdateTime: "2022-10-13T10:23:54Z" message: replication controller "tag-images-1" has failed progressing reason: ProgressDeadlineExceeded status: "False" type: Progressing details: causes: - type: ConfigChange message: config change latestVersion: 1 observedGeneration: 1 replicas: 0 unavailableReplicas: 0 updatedReplicas: 0 Oct 13 10:23:54.890: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 get rc/tag-images-1 -o yaml' Oct 13 10:23:55.022: INFO: apiVersion: v1 kind: ReplicationController metadata: annotations: kubectl.kubernetes.io/desired-replicas: "1" openshift.io/deployer-pod.completed-at: 2022-10-13 10:23:52 +0000 UTC openshift.io/deployer-pod.created-at: 2022-10-13 10:22:56 +0000 UTC openshift.io/deployer-pod.name: tag-images-1-deploy openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: tag-images openshift.io/deployment.phase: Failed openshift.io/deployment.replicas: "0" openshift.io/deployment.status-reason: config change openshift.io/encoded-deployment-config: | {"kind":"DeploymentConfig","apiVersion":"apps.openshift.io/v1","metadata":{"name":"tag-images","namespace":"e2e-test-cli-deployment-rxkqx","uid":"dc3129c1-ed5d-4a82-9435-9ae94f3c1de3","resourceVersion":"951669","generation":1,"creationTimestamp":"2022-10-13T10:22:56Z","managedFields":[{"manager":"openshift-controller-manager","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:22:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:details":{".":{},"f:causes":{},"f:message":{}},"f:latestVersion":{}}},"subresource":"status"},{"manager":"openshift-tests","operation":"Update","apiVersion":"apps.openshift.io/v1","time":"2022-10-13T10:22:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:name":{}},"f:strategy":{"f:activeDeadlineSeconds":{},"f:recreateParams":{".":{},"f:post":{".":{},"f:failurePolicy":{},"f:tagImages":{}},"f:timeoutSeconds":{}},"f:type":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:name":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"sample-name\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8080,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"f:test":{},"f:triggers":{}}}}]},"spec":{"strategy":{"type":"Recreate","recreateParams":{"timeoutSeconds":600,"post":{"failurePolicy":"Abort","tagImages":[{"containerName":"sample-name","to":{"kind":"ImageStreamTag","name":"sample-stream:deployed"}}]}},"resources":{},"activeDeadlineSeconds":21600},"triggers":[{"type":"ConfigChange"}],"replicas":1,"revisionHistoryLimit":10,"test":true,"selector":{"name":"tag-images"},"template":{"metadata":{"creationTimestamp":null,"labels":{"name":"tag-images"}},"spec":{"containers":[{"name":"sample-name","image":"image-registry.openshift-image-registry.svc:5000/openshift/tools:latest","command":["/bin/sh","-c","sleep 
300"],"ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{"limits":{"cpu":"100m","memory":"3Gi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":1,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}}},"status":{"latestVersion":1,"observedGeneration":0,"replicas":0,"updatedReplicas":0,"availableReplicas":0,"unavailableReplicas":0,"details":{"message":"config change","causes":[{"type":"ConfigChange"}]}}} creationTimestamp: "2022-10-13T10:22:56Z" generation: 1 labels: openshift.io/deployment-config.name: tag-images name: tag-images-1 namespace: e2e-test-cli-deployment-rxkqx ownerReferences: - apiVersion: apps.openshift.io/v1 blockOwnerDeletion: true controller: true kind: DeploymentConfig name: tag-images uid: dc3129c1-ed5d-4a82-9435-9ae94f3c1de3 resourceVersion: "953526" uid: 65d98a2d-5024-4cdf-af3b-a38005c46590 spec: replicas: 0 selector: deployment: tag-images-1 deploymentconfig: tag-images name: tag-images template: metadata: annotations: openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: tag-images openshift.io/deployment.name: tag-images-1 creationTimestamp: null labels: deployment: tag-images-1 deploymentconfig: tag-images name: tag-images spec: containers: - command: - /bin/sh - -c - sleep 300 image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest imagePullPolicy: IfNotPresent name: sample-name ports: - containerPort: 8080 protocol: TCP resources: limits: cpu: 100m memory: 3Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 1 status: observedGeneration: 1 replicas: 0 Oct 13 10:23:55.022: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 get pod/tag-images-1-deploy -o yaml' Oct 13 10:23:55.124: INFO: apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.221.50" ], "mac": "fa:16:3e:02:b8:60", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.221.50" ], "mac": "fa:16:3e:02:b8:60", "default": true, "dns": {} }] openshift.io/deployment-config.name: tag-images openshift.io/deployment.name: tag-images-1 openshift.io/scc: restricted creationTimestamp: "2022-10-13T10:22:56Z" finalizers: - kuryr.openstack.org/pod-finalizer labels: openshift.io/deployer-pod-for.name: tag-images-1 name: tag-images-1-deploy namespace: e2e-test-cli-deployment-rxkqx ownerReferences: - apiVersion: v1 kind: ReplicationController name: tag-images-1 uid: 65d98a2d-5024-4cdf-af3b-a38005c46590 resourceVersion: "953525" uid: feb6640d-b1c9-4424-a87c-df4f0f84043b spec: activeDeadlineSeconds: 21600 containers: - env: - name: OPENSHIFT_DEPLOYMENT_NAME value: tag-images-1 - name: OPENSHIFT_DEPLOYMENT_NAMESPACE value: e2e-test-cli-deployment-rxkqx image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902 imagePullPolicy: IfNotPresent name: deployment resources: {} securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1012510000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mtlb2 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: deployer-dockercfg-gfwfd nodeName: ostest-n5rnf-worker-0-j4pkp preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: fsGroup: 1012510000 seLinuxOptions: level: s0:c112,c39 serviceAccount: deployer serviceAccountName: deployer shareProcessNamespace: false terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: kube-api-access-mtlb2 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-13T10:23:52Z" message: 'containers with unready status: [deployment]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-13T10:23:52Z" message: 'containers with unready status: [deployment]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://d1e13160a20d8b4d32052c590aa4e9db0345d863b8cab28925913f238059b5b8 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902 lastState: {} name: deployment ready: false restartCount: 0 started: false state: terminated: containerID: cri-o://d1e13160a20d8b4d32052c590aa4e9db0345d863b8cab28925913f238059b5b8 exitCode: 1 finishedAt: "2022-10-13T10:23:52Z" reason: Error startedAt: "2022-10-13T10:23:22Z" hostIP: 10.196.0.199 phase: Failed podIP: 10.128.221.50 podIPs: - ip: 10.128.221.50 qosClass: BestEffort startTime: "2022-10-13T10:22:56Z" Oct 13 10:23:55.124: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 logs pod/tag-images-1-deploy --timestamps=true' Oct 13 10:23:55.276: INFO: --- pod tag-images-1-deploy logs 2022-10-13T10:23:52.378254866Z error: couldn't get deployment tag-images-1: Get "https://172.30.0.1:443/api/v1/namespaces/e2e-test-cli-deployment-rxkqx/replicationcontrollers/tag-images-1": dial tcp 172.30.0.1:443: i/o timeout--- Oct 13 10:23:55.276: INFO: Running 'oc --namespace=e2e-test-cli-deployment-rxkqx --kubeconfig=/tmp/configfile3297171011 get istag -o wide' Oct 13 10:23:55.396: INFO: No resources found in e2e-test-cli-deployment-rxkqx namespace. [AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/deployments/deployments.go:71 [AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/client.go:140 STEP: Collecting events from namespace "e2e-test-cli-deployment-rxkqx". STEP: Found 6 events. 
Oct 13 10:23:57.407: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for tag-images-1-deploy: { } Scheduled: Successfully assigned e2e-test-cli-deployment-rxkqx/tag-images-1-deploy to ostest-n5rnf-worker-0-j4pkp Oct 13 10:23:57.407: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for tag-images: {deploymentconfig-controller } DeploymentCreated: Created new replication controller "tag-images-1" for version 1 Oct 13 10:23:57.407: INFO: At 2022-10-13 10:23:21 +0000 UTC - event for tag-images-1-deploy: {multus } AddedInterface: Add eth0 [10.128.221.50/23] from kuryr Oct 13 10:23:57.407: INFO: At 2022-10-13 10:23:22 +0000 UTC - event for tag-images-1-deploy: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32fdfc88a2e9b8be7b07c5c623cfc2ee75ce69af65c94493f81252ca753e7902" already present on machine Oct 13 10:23:57.407: INFO: At 2022-10-13 10:23:22 +0000 UTC - event for tag-images-1-deploy: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container deployment Oct 13 10:23:57.407: INFO: At 2022-10-13 10:23:22 +0000 UTC - event for tag-images-1-deploy: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container deployment Oct 13 10:23:57.413: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 10:23:57.413: INFO: tag-images-1-deploy ostest-n5rnf-worker-0-j4pkp Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:52 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:52 +0000 UTC ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC }] Oct 13 10:23:57.413: INFO: Oct 13 10:23:57.419: INFO: skipping dumping cluster info - cluster too large Oct 13 10:23:57.435: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-cli-deployment-rxkqx-user}, err: <nil> Oct 13 10:23:57.448: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-cli-deployment-rxkqx}, err: <nil> Oct 13 10:23:57.475: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~SLEFsh6mB8xKXpk8QzkEVt-sunNz4Ziab7RnRyJ_x_w}, err: <nil> [AfterEach] [sig-apps][Feature:DeploymentConfig] deploymentconfigs github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-cli-deployment-rxkqx" for this suite. fail [github.com/openshift/origin/test/extended/deployments/deployments.go:481]: Unexpected error: <*errors.errorString | 0xc001d76e70>: { s: "deployment e2e-test-cli-deployment-rxkqx/tag-images-1 failed", } deployment e2e-test-cli-deployment-rxkqx/tag-images-1 failed occurred
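Note: both DeploymentConfig failures in this run share one signature: the deployer pod schedules promptly, waits 25-50 s for kuryr to attach eth0 (10:23:34 to 10:24:26 for example-1-deploy, 10:22:56 to 10:23:21 for tag-images-1-deploy), and then the container exits after roughly 30 s with an i/o timeout to the 172.30.0.1:443 service VIP. That pattern points at Kuryr service networking rather than at the deployments under test. A first place to look, assuming the default Kuryr layout on OpenShift (the namespace and workload names here are assumptions):

    # pod health, then recent controller errors around the 10:23-10:25 UTC window
    oc -n openshift-kuryr get pods -o wide
    oc -n openshift-kuryr logs deployment/kuryr-controller --since=2h | grep -iE 'error|timeout'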
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-builds][Feature:Builds] clone repository using git:// protocol github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-builds][Feature:Builds] clone repository using git:// protocol github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:22:41.185: INFO: configPath is now "/tmp/configfile458335381" Oct 13 10:22:41.185: INFO: The user is now "e2e-test-build-clone-git-protocol-hm7qz-user" Oct 13 10:22:41.185: INFO: Creating project "e2e-test-build-clone-git-protocol-hm7qz" Oct 13 10:22:41.430: INFO: Waiting on permissions in project "e2e-test-build-clone-git-protocol-hm7qz" ... Oct 13 10:22:41.439: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:22:41.560: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:22:41.674: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:22:41.786: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:22:41.797: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:22:41.823: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:22:42.468: INFO: Project "e2e-test-build-clone-git-protocol-hm7qz" has been fully provisioned. [BeforeEach] github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:17 [JustBeforeEach] github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:21 STEP: waiting for openshift namespace imagestreams Oct 13 10:22:42.468: INFO: Waiting up to 2 minutes for the internal registry hostname to be published Oct 13 10:22:44.549: INFO: the OCM pod logs indicate the build controller was started after the internal registry hostname has been set in the OCM config Oct 13 10:22:44.564: INFO: OCM rollout progressing status reports complete Oct 13 10:22:44.564: INFO: Scanning openshift ImageStreams Oct 13 10:22:54.577: INFO: SamplesOperator at steady state Oct 13 10:22:54.578: INFO: SamplesOperator at steady state Oct 13 10:22:54.578: INFO: Checking language ruby Oct 13 10:22:54.602: INFO: Checking tag {2.5-ubi8 map[description:Build and run Ruby 2.5 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.5/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.5 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.5,ruby tags:builder,ruby version:2.5] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-25:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c602f0 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking tag {2.6 map[description:Build and run Ruby 2.6 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 openshift.io/provider-display-name:Red Hat, Inc. 
sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby,hidden version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/ruby-26-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60390 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking tag {2.6-ubi7 map[description:Build and run Ruby 2.6 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-26:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60580 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking tag {2.6-ubi8 map[description:Build and run Ruby 2.6 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-26:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60690 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking tag {2.7 map[description:Build and run Ruby 2.7 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby,hidden version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/ruby-27-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60730 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking tag {2.7-ubi7 map[description:Build and run Ruby 2.7 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60800 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking tag {2.7-ubi8 map[description:Build and run Ruby 2.7 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. 
sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c608c0 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking tag {3.0-ubi7 map[description:Build and run Ruby 3.0 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/3.0/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 3.0 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:3.0,ruby tags:builder,ruby version:3.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-30:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60980 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking tag {latest map[description:Build and run Ruby applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/tree/master/2.7/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Ruby available on OpenShift, including major version updates. iconClass:icon-ruby openshift.io/display-name:Ruby (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby tags:builder,ruby] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:2.7-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000c60a48 {false false} {Local}} Oct 13 10:22:54.603: INFO: Checking language nodejs Oct 13 10:22:54.618: INFO: Checking tag {12 map[description:Build and run Node.js 12 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/nodejs-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382130 {false false} {Local}} Oct 13 10:22:54.618: INFO: Checking tag {12-ubi7 map[description:Build and run Node.js 12 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/nodejs-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382280 {false false} {Local}} Oct 13 10:22:54.618: INFO: Checking tag {12-ubi8 map[description:Build and run Node.js 12 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. 
sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0003823d0 {false false} {Local}} Oct 13 10:22:54.618: INFO: Checking tag {14-ubi7 map[description:Build and run Node.js 14 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/nodejs-14:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382530 {false false} {Local}} Oct 13 10:22:54.618: INFO: Checking tag {14-ubi8 map[description:Build and run Node.js 14 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-14:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0003827e0 {false false} {Local}} Oct 13 10:22:54.618: INFO: Checking tag {14-ubi8-minimal map[description:Build and run Node.js 14 applications on UBI 8 Minimal. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 8 Minimal) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-14-minimal:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382a90 {false false} {Local}} Oct 13 10:22:54.618: INFO: Checking tag {latest map[description:Build and run Node.js applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Node.js available on OpenShift, including major version updates. iconClass:icon-nodejs openshift.io/display-name:Node.js (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git supports:nodejs tags:builder,nodejs] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:14-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000382c20 {false false} {Local}} Oct 13 10:22:54.618: INFO: Checking language perl Oct 13 10:22:54.636: INFO: Checking tag {5.26-ubi8 map[description:Build and run Perl 5.26 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.26-mod_fcgid/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.26 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. 
sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.26,perl tags:builder,perl version:5.26] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/perl-526:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc87a0 {false false} {Local}} Oct 13 10:22:54.636: INFO: Checking tag {5.30 map[description:Build and run Perl 5.30 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl,hidden version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/perl-530-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc8850 {false false} {Local}} Oct 13 10:22:54.636: INFO: Checking tag {5.30-el7 map[description:Build and run Perl 5.30 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/perl-530-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc8980 {false false} {Local}} Oct 13 10:22:54.636: INFO: Checking tag {5.30-ubi8 map[description:Build and run Perl 5.30 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30-mod_fcgid/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/perl-530:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc8a40 {false false} {Local}} Oct 13 10:22:54.636: INFO: Checking tag {latest map[description:Build and run Perl applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30-mod_fcgid/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Perl available on OpenShift, including major version updates. iconClass:icon-perl openshift.io/display-name:Perl (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl tags:builder,perl] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:5.30-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc8b10 {false false} {Local}} Oct 13 10:22:54.636: INFO: Checking language php Oct 13 10:22:54.651: INFO: Checking tag {7.3 map[description:Build and run PHP 7.3 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. 
iconClass:icon-php openshift.io/display-name:PHP 7.3 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php,hidden version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/php-73-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc9db0 {false false} {Local}} Oct 13 10:22:54.651: INFO: Checking tag {7.3-ubi7 map[description:Build and run PHP 7.3 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/php-73:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc9e70 {false false} {Local}} Oct 13 10:22:54.651: INFO: Checking tag {7.3-ubi8 map[description:Build and run PHP 7.3 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/php-73:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc9f30 {false false} {Local}} Oct 13 10:22:54.651: INFO: Checking tag {7.4-ubi8 map[description:Build and run PHP 7.4 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.4/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.4 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.4,php tags:builder,php version:7.4] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/php-74:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc000fc9ff0 {false false} {Local}} Oct 13 10:22:54.651: INFO: Checking tag {latest map[description:Build and run PHP applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.4/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of PHP available on OpenShift, including major version updates. iconClass:icon-php openshift.io/display-name:PHP (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php tags:builder,php] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:7.4-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f200b0 {false false} {Local}} Oct 13 10:22:54.651: INFO: Checking language python Oct 13 10:22:54.668: INFO: Checking tag {2.7 map[description:Build and run Python 2.7 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. 
iconClass:icon-python openshift.io/display-name:Python 2.7 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python,hidden version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/python-27-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21650 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking tag {2.7-ubi7 map[description:Build and run Python 2.7 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/python-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f216f0 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking tag {2.7-ubi8 map[description:Build and run Python 2.7 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21790 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking tag {3.6-ubi8 map[description:Build and run Python 3.6 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.6/README.md. iconClass:icon-python openshift.io/display-name:Python 3.6 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.6,python tags:builder,python version:3.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-36:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21840 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking tag {3.8 map[description:Build and run Python 3.8 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python,hidden version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/python-38-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f218e0 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking tag {3.8-ubi7 map[description:Build and run Python 3.8 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. 
iconClass:icon-python openshift.io/display-name:Python 3.8 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/python-38:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21990 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking tag {3.8-ubi8 map[description:Build and run Python 3.8 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-38:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21a40 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking tag {3.9-ubi8 map[description:Build and run Python 3.9 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.9/README.md. iconClass:icon-python openshift.io/display-name:Python 3.9 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.9,python tags:builder,python version:3.9] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-39:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21ae0 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking tag {latest map[description:Build and run Python applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.9/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Python available on OpenShift, including major version updates. iconClass:icon-python openshift.io/display-name:Python (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python tags:builder,python] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:3.9-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001f21bc0 {false false} {Local}} Oct 13 10:22:54.669: INFO: Checking language mysql Oct 13 10:22:54.688: INFO: Checking tag {8.0 map[description:Provides a MySQL 8.0 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 openshift.io/provider-display-name:Red Hat, Inc. tags:mysql,hidden version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/mysql-80-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0013c2490 {false false} {Local}} Oct 13 10:22:54.689: INFO: Checking tag {8.0-el7 map[description:Provides a MySQL 8.0 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. 
iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/mysql-80-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0013c24f0 {false false} {Local}} Oct 13 10:22:54.689: INFO: Checking tag {8.0-el8 map[description:Provides a MySQL 8.0 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/mysql-80:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0013c2550 {false false} {Local}} Oct 13 10:22:54.689: INFO: Checking tag {latest map[description:Provides a MySQL database on RHEL. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of MySQL available on OpenShift, including major version updates. iconClass:icon-mysql-database openshift.io/display-name:MySQL (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:8.0-el8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0013c25e0 {false false} {Local}} Oct 13 10:22:54.689: INFO: Checking language postgresql Oct 13 10:22:54.710: INFO: Checking tag {10 map[description:Provides a PostgreSQL 10 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Ephemeral) 10 openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql,hidden version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-10-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc100 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {10-el7 map[description:Provides a PostgreSQL 10 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 10 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-10-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc170 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {10-el8 map[description:Provides a PostgreSQL 10 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 10 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. 
tags:database,postgresql version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-10:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc1e0 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {12 map[description:Provides a PostgreSQL 12 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Ephemeral) 12 openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc250 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {12-el7 map[description:Provides a PostgreSQL 12 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 12 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc2c0 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {12-el8 map[description:Provides a PostgreSQL 12 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 12 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc330 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {13-el7 map[description:Provides a PostgreSQL 13 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 13 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:13] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-13-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc3a0 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {13-el8 map[description:Provides a PostgreSQL 13 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 13 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:13] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-13:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc410 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {9.6-el8 map[description:Provides a PostgreSQL 9.6 database on RHEL 8. 
For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 9.6 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:9.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-96:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc480 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking tag {latest map[description:Provides a PostgreSQL database on RHEL. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of PostgreSQL available on OpenShift, including major version updates. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:13-el8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0020bc508 {false false} {Local}} Oct 13 10:22:54.710: INFO: Checking language jenkins Oct 13 10:22:54.744: INFO: Checking tag {2 map[description:Provides a Jenkins 2.X server on RHEL. For more information about using this container image, including OpenShift considerations, see https://github.com/openshift/jenkins/blob/master/README.md. iconClass:icon-jenkins openshift.io/display-name:Jenkins 2.X openshift.io/provider-display-name:Red Hat, Inc. tags:jenkins version:2.x] &ObjectReference{Kind:DockerImage,Namespace:,Name:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba54e46eebfe50a572eb683ebc0960d5c682635e4640b480c7274bb9fa81e26,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0018eece0 {false false} {Local}} Oct 13 10:22:54.744: INFO: Checking tag {latest map[description:Provides a Jenkins server on RHEL. For more information about using this container image, including OpenShift considerations, see https://github.com/openshift/jenkins/blob/master/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Jenkins available on OpenShift, including major versions updates. iconClass:icon-jenkins openshift.io/display-name:Jenkins (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:jenkins] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:2,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0018eed68 {false false} {Local}} Oct 13 10:22:54.745: INFO: Success! 
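The long run of "Checking language ..." / "Checking tag {...}" entries ending in "Success!" above is the suite scanning the sample imagestreams in the openshift namespace and recording each tag's annotations and from-reference before the build test proceeds. A minimal sketch of that kind of scan, assuming the standard OpenShift image clientset (this is not the suite's actual scanner):

// Hedged sketch: walk the imagestreams in the "openshift" namespace and
// print each tag with its from-reference, roughly what the "Checking tag"
// log lines record. Client wiring is an assumption.
package main

import (
	"context"
	"fmt"

	imageclient "github.com/openshift/client-go/image/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := imageclient.NewForConfigOrDie(cfg)
	streams, err := client.ImageV1().ImageStreams("openshift").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, is := range streams.Items {
		fmt.Println("Checking language", is.Name)
		for _, tag := range is.Spec.Tags {
			from := "<none>"
			if tag.From != nil {
				from = tag.From.Kind + ":" + tag.From.Name
			}
			// Mirrors the tag-name / from-reference pairs in the log entries.
			fmt.Printf("  tag %s -> %s\n", tag.Name, from)
		}
	}
}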
[It] should clone using git:// if no proxy is configured [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:36 [AfterEach] github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:28 [AfterEach] [sig-builds][Feature:Builds] clone repository using git:// protocol github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:22:54.780: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-build-clone-git-protocol-hm7qz-user}, err: <nil> Oct 13 10:22:54.798: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-build-clone-git-protocol-hm7qz}, err: <nil> Oct 13 10:22:54.813: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~N4poXk5eRlIp6hqij4C0cbqcI3dV_o9-qfGC8hhJr-0}, err: <nil> [AfterEach] [sig-builds][Feature:Builds] clone repository using git:// protocol github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-build-clone-git-protocol-hm7qz" for this suite. skip [github.com/openshift/origin/test/extended/builds/clone_git_protocol.go:40]: test disabled due to https://bugzilla.redhat.com/show_bug.cgi?id=2019433 and https://github.blog/2021-09-01-improving-git-protocol-security-github/#git-protocol-troubleshooting: 'The unauthenticated git protocol on port 9418 is no longer supported'
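The skip recorded at clone_git_protocol.go:40 is deliberate: GitHub removed the unauthenticated git:// protocol on port 9418 (see the bugzilla and github.blog links quoted in the message), so the spec bails out before attempting the clone. A guard of roughly this shape produces that output — the structure below is an illustrative Ginkgo sketch, not the file's actual code:

// Hedged sketch of a Ginkgo skip guard in the shape of the entry above;
// spec names match the log, the body is illustrative.
package builds_test

import (
	g "github.com/onsi/ginkgo"
)

var _ = g.Describe("clone repository using git:// protocol", func() {
	g.It("should clone using git:// if no proxy is configured", func() {
		// github.com no longer serves the unauthenticated git protocol on
		// port 9418, so skip before attempting the clone.
		g.Skip("test disabled: the unauthenticated git protocol on port 9418 is no longer supported")
		// ...the original clone assertions would follow here...
	})
})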
fail [github.com/openshift/origin/test/extended/builds/new_app.go:68]: Unexpected error: <*errors.errorString | 0xc00295bda0>: { s: "The build \"a234567890123456789012345678901234567890123456789012345678-1\" status is \"Failed\"", } The build "a234567890123456789012345678901234567890123456789012345678-1" status is "Failed" occurred
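The failing build name here is deliberately long: "a2345...678" is 58 characters, and the "-1" build-number suffix brings it to 60, just under the 63-character DNS-1123 label limit that Kubernetes enforces for names of this class — the test at new_app.go:68 appears to exercise oc new-app at that boundary, and the build itself failed rather than the name validation. A small sketch of the length arithmetic, assuming the standard apimachinery validator (illustrative, not the test's code):

// Hedged sketch: verify that the near-limit build name from the failure,
// plus its "-1" build suffix, still fits the 63-character DNS-1123 label
// limit. The name string is copied from the log above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	name := "a234567890123456789012345678901234567890123456789012345678" // 58 chars
	build := name + "-1"                                                 // build number suffix
	fmt.Println(len(name), len(build))                                   // 58 60
	if errs := validation.IsDNS1123Label(build); len(errs) != 0 {
		fmt.Println("invalid:", errs)
	} else {
		fmt.Println("valid DNS-1123 label")
	}
}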
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-builds][Feature:Builds] oc new-app github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-builds][Feature:Builds] oc new-app github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:22:39.300: INFO: configPath is now "/tmp/configfile303236974" Oct 13 10:22:39.300: INFO: The user is now "e2e-test-new-app-wckrp-user" Oct 13 10:22:39.300: INFO: Creating project "e2e-test-new-app-wckrp" Oct 13 10:22:39.579: INFO: Waiting on permissions in project "e2e-test-new-app-wckrp" ... Oct 13 10:22:39.594: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:22:39.702: INFO: Waiting for service account "default" secrets (default-token-cr5gb) to include dockercfg/token ... Oct 13 10:22:39.817: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:22:39.928: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:22:40.036: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:22:40.046: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:22:40.052: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:22:40.580: INFO: Project "e2e-test-new-app-wckrp" has been fully provisioned. [BeforeEach] github.com/openshift/origin/test/extended/builds/new_app.go:32 [JustBeforeEach] github.com/openshift/origin/test/extended/builds/new_app.go:36 STEP: waiting on the local namespace builder/default SAs STEP: waiting for openshift namespace imagestreams Oct 13 10:22:40.793: INFO: Waiting up to 2 minutes for the internal registry hostname to be published Oct 13 10:22:42.870: INFO: the OCM pod logs indicate the build controller was started after the internal registry hostname has been set in the OCM config Oct 13 10:22:42.883: INFO: OCM rollout progressing status reports complete Oct 13 10:22:42.883: INFO: Scanning openshift ImageStreams Oct 13 10:22:52.909: INFO: SamplesOperator at steady state Oct 13 10:22:52.909: INFO: SamplesOperator at steady state Oct 13 10:22:52.909: INFO: Checking language ruby Oct 13 10:22:52.954: INFO: Checking tag {2.5-ubi8 map[description:Build and run Ruby 2.5 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.5/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.5 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.5,ruby tags:builder,ruby version:2.5] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-25:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9430 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking tag {2.6 map[description:Build and run Ruby 2.6 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 openshift.io/provider-display-name:Red Hat, Inc. 
sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby,hidden version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/ruby-26-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f94d0 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking tag {2.6-ubi7 map[description:Build and run Ruby 2.6 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-26:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f95b0 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking tag {2.6-ubi8 map[description:Build and run Ruby 2.6 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.6/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.6 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.6,ruby tags:builder,ruby version:2.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-26:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9670 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking tag {2.7 map[description:Build and run Ruby 2.7 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby,hidden version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/ruby-27-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9710 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking tag {2.7-ubi7 map[description:Build and run Ruby 2.7 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f97d0 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking tag {2.7-ubi8 map[description:Build and run Ruby 2.7 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.7/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 2.7 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. 
sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:2.7,ruby tags:builder,ruby version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/ruby-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9890 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking tag {3.0-ubi7 map[description:Build and run Ruby 3.0 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/3.0/README.md. iconClass:icon-ruby openshift.io/display-name:Ruby 3.0 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby:3.0,ruby tags:builder,ruby version:3.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/ruby-30:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9950 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking tag {latest map[description:Build and run Ruby applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/tree/master/2.7/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Ruby available on OpenShift, including major version updates. iconClass:icon-ruby openshift.io/display-name:Ruby (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/ruby-ex.git supports:ruby tags:builder,ruby] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:2.7-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024f9a18 {false false} {Local}} Oct 13 10:22:52.954: INFO: Checking language nodejs Oct 13 10:22:52.988: INFO: Checking tag {12 map[description:Build and run Node.js 12 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/nodejs-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c34f80 {false false} {Local}} Oct 13 10:22:52.988: INFO: Checking tag {12-ubi7 map[description:Build and run Node.js 12 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/nodejs-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35000 {false false} {Local}} Oct 13 10:22:52.988: INFO: Checking tag {12-ubi8 map[description:Build and run Node.js 12 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/12/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 12 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. 
sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35090 {false false} {Local}} Oct 13 10:22:52.988: INFO: Checking tag {14-ubi7 map[description:Build and run Node.js 14 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs,hidden version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/nodejs-14:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35110 {false false} {Local}} Oct 13 10:22:52.988: INFO: Checking tag {14-ubi8 map[description:Build and run Node.js 14 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-14:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c351a0 {false false} {Local}} Oct 13 10:22:52.988: INFO: Checking tag {14-ubi8-minimal map[description:Build and run Node.js 14 applications on UBI 8 Minimal. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. iconClass:icon-nodejs openshift.io/display-name:Node.js 14 (UBI 8 Minimal) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git tags:builder,nodejs version:14] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/nodejs-14-minimal:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35240 {false false} {Local}} Oct 13 10:22:52.988: INFO: Checking tag {latest map[description:Build and run Node.js applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-nodejs-container/blob/master/14/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Node.js available on OpenShift, including major version updates. iconClass:icon-nodejs openshift.io/display-name:Node.js (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/nodejs-ex.git supports:nodejs tags:builder,nodejs] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:14-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc002c35310 {false false} {Local}} Oct 13 10:22:52.988: INFO: Checking language perl Oct 13 10:22:53.002: INFO: Checking tag {5.26-ubi8 map[description:Build and run Perl 5.26 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.26-mod_fcgid/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.26 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. 
sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.26,perl tags:builder,perl version:5.26] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/perl-526:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761a20 {false false} {Local}} Oct 13 10:22:53.003: INFO: Checking tag {5.30 map[description:Build and run Perl 5.30 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl,hidden version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/perl-530-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761b20 {false false} {Local}} Oct 13 10:22:53.003: INFO: Checking tag {5.30-el7 map[description:Build and run Perl 5.30 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/perl-530-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761cb0 {false false} {Local}} Oct 13 10:22:53.003: INFO: Checking tag {5.30-ubi8 map[description:Build and run Perl 5.30 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30-mod_fcgid/README.md. iconClass:icon-perl openshift.io/display-name:Perl 5.30 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl:5.30,perl tags:builder,perl version:5.30] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/perl-530:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761d80 {false false} {Local}} Oct 13 10:22:53.003: INFO: Checking tag {latest map[description:Build and run Perl applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-perl-container/blob/master/5.30-mod_fcgid/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Perl available on OpenShift, including major version updates. iconClass:icon-perl openshift.io/display-name:Perl (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/dancer-ex.git supports:perl tags:builder,perl] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:5.30-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc001761e50 {false false} {Local}} Oct 13 10:22:53.003: INFO: Checking language php Oct 13 10:22:53.015: INFO: Checking tag {7.3 map[description:Build and run PHP 7.3 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. 
iconClass:icon-php openshift.io/display-name:PHP 7.3 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php,hidden version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/php-73-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d0a0 {false false} {Local}}
Oct 13 10:22:53.016: INFO: Checking tag {7.3-ubi7 map[description:Build and run PHP 7.3 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/php-73:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d160 {false false} {Local}}
Oct 13 10:22:53.016: INFO: Checking tag {7.3-ubi8 map[description:Build and run PHP 7.3 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.3/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.3 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.3,php tags:builder,php version:7.3] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/php-73:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d220 {false false} {Local}}
Oct 13 10:22:53.016: INFO: Checking tag {7.4-ubi8 map[description:Build and run PHP 7.4 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.4/README.md. iconClass:icon-php openshift.io/display-name:PHP 7.4 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php:7.4,php tags:builder,php version:7.4] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/php-74:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d2e0 {false false} {Local}}
Oct 13 10:22:53.016: INFO: Checking tag {latest map[description:Build and run PHP applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.4/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of PHP available on OpenShift, including major version updates. iconClass:icon-php openshift.io/display-name:PHP (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/cakephp-ex.git supports:php tags:builder,php] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:7.4-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc00245d3a0 {false false} {Local}}
Oct 13 10:22:53.016: INFO: Checking language python
Oct 13 10:22:53.030: INFO: Checking tag {2.7 map[description:Build and run Python 2.7 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python,hidden version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/python-27-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8180 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking tag {2.7-ubi7 map[description:Build and run Python 2.7 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/python-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8220 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking tag {2.7-ubi8 map[description:Build and run Python 2.7 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/2.7/README.md. iconClass:icon-python openshift.io/display-name:Python 2.7 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:2.7,python tags:builder,python version:2.7] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-27:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d82c0 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking tag {3.6-ubi8 map[description:Build and run Python 3.6 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.6/README.md. iconClass:icon-python openshift.io/display-name:Python 3.6 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.6,python tags:builder,python version:3.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-36:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8360 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking tag {3.8 map[description:Build and run Python 3.8 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python,hidden version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/python-38-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8400 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking tag {3.8-ubi7 map[description:Build and run Python 3.8 applications on UBI 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 (UBI 7) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi7/python-38:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d84a0 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking tag {3.8-ubi8 map[description:Build and run Python 3.8 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. iconClass:icon-python openshift.io/display-name:Python 3.8 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.8,python tags:builder,python version:3.8] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-38:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d8540 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking tag {3.9-ubi8 map[description:Build and run Python 3.9 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.9/README.md. iconClass:icon-python openshift.io/display-name:Python 3.9 (UBI 8) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python:3.9,python tags:builder,python version:3.9] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/ubi8/python-39:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d85e0 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking tag {latest map[description:Build and run Python applications on UBI. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.9/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Python available on OpenShift, including major version updates. iconClass:icon-python openshift.io/display-name:Python (Latest) openshift.io/provider-display-name:Red Hat, Inc. sampleRepo:https://github.com/sclorg/django-ex.git supports:python tags:builder,python] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:3.9-ubi8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d86b0 {false false} {Local}}
Oct 13 10:22:53.030: INFO: Checking language mysql
Oct 13 10:22:53.044: INFO: Checking tag {8.0 map[description:Provides a MySQL 8.0 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 openshift.io/provider-display-name:Red Hat, Inc. tags:mysql,hidden version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/mysql-80-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d98c0 {false false} {Local}}
Oct 13 10:22:53.044: INFO: Checking tag {8.0-el7 map[description:Provides a MySQL 8.0 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/mysql-80-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d9920 {false false} {Local}}
Oct 13 10:22:53.044: INFO: Checking tag {8.0-el8 map[description:Provides a MySQL 8.0 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. iconClass:icon-mysql-database openshift.io/display-name:MySQL 8.0 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql version:8.0] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/mysql-80:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d9980 {false false} {Local}}
Oct 13 10:22:53.044: INFO: Checking tag {latest map[description:Provides a MySQL database on RHEL. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/mysql-container/blob/master/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of MySQL available on OpenShift, including major version updates. iconClass:icon-mysql-database openshift.io/display-name:MySQL (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:mysql] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:8.0-el8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0021d9a10 {false false} {Local}}
Oct 13 10:22:53.044: INFO: Checking language postgresql
Oct 13 10:22:53.098: INFO: Checking tag {10 map[description:Provides a PostgreSQL 10 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Ephemeral) 10 openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql,hidden version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-10-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6820 {false false} {Local}}
Oct 13 10:22:53.098: INFO: Checking tag {10-el7 map[description:Provides a PostgreSQL 10 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 10 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-10-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6890 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking tag {10-el8 map[description:Provides a PostgreSQL 10 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 10 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:10] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-10:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6900 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking tag {12 map[description:Provides a PostgreSQL 12 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Ephemeral) 12 openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql,hidden version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6970 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking tag {12-el7 map[description:Provides a PostgreSQL 12 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 12 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-12-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e69e0 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking tag {12-el8 map[description:Provides a PostgreSQL 12 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 12 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:12] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-12:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6a50 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking tag {13-el7 map[description:Provides a PostgreSQL 13 database on RHEL 7. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 13 (RHEL 7) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:13] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhscl/postgresql-13-rhel7:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6ac0 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking tag {13-el8 map[description:Provides a PostgreSQL 13 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 13 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:13] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-13:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6b30 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking tag {9.6-el8 map[description:Provides a PostgreSQL 9.6 database on RHEL 8. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL 9.6 (RHEL 8) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql version:9.6] &ObjectReference{Kind:DockerImage,Namespace:,Name:registry.redhat.io/rhel8/postgresql-96:latest,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6ba0 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking tag {latest map[description:Provides a PostgreSQL database on RHEL. For more information about using this database image, including OpenShift considerations, see https://github.com/sclorg/postgresql-container/blob/master/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of PostgreSQL available on OpenShift, including major version updates. iconClass:icon-postgresql openshift.io/display-name:PostgreSQL (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:database,postgresql] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:13-el8,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e6c28 {false false} {Local}}
Oct 13 10:22:53.099: INFO: Checking language jenkins
Oct 13 10:22:53.110: INFO: Checking tag {2 map[description:Provides a Jenkins 2.X server on RHEL. For more information about using this container image, including OpenShift considerations, see https://github.com/openshift/jenkins/blob/master/README.md. iconClass:icon-jenkins openshift.io/display-name:Jenkins 2.X openshift.io/provider-display-name:Red Hat, Inc. tags:jenkins version:2.x] &ObjectReference{Kind:DockerImage,Namespace:,Name:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba54e46eebfe50a572eb683ebc0960d5c682635e4640b480c7274bb9fa81e26,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e7d70 {false false} {Local}}
Oct 13 10:22:53.110: INFO: Checking tag {latest map[description:Provides a Jenkins server on RHEL. For more information about using this container image, including OpenShift considerations, see https://github.com/openshift/jenkins/blob/master/README.md. WARNING: By selecting this tag, your application will automatically update to use the latest version of Jenkins available on OpenShift, including major versions updates. iconClass:icon-jenkins openshift.io/display-name:Jenkins (Latest) openshift.io/provider-display-name:Red Hat, Inc. tags:jenkins] &ObjectReference{Kind:ImageStreamTag,Namespace:,Name:2,UID:,APIVersion:,ResourceVersion:,FieldPath:,} false 0xc0024e7df8 {false false} {Local}}
Oct 13 10:22:53.110: INFO: Success!
[It] should succeed with a --name of 58 characters [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
github.com/openshift/origin/test/extended/builds/new_app.go:57
STEP: calling oc new-app
Oct 13 10:22:53.110: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 new-app https://github.com/sclorg/nodejs-ex --name a234567890123456789012345678901234567890123456789012345678 --build-env=BUILD_LOGLEVEL=5'
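For anyone re-running this case by hand: the test drives plain oc commands, so the failing scenario can be approximated with the sketch below. This assumes a logged-in session and an existing project (the namespace and kubeconfig above are generated per run); the name matches the 58-character name the test uses.

    # Hypothetical manual re-run of the same new-app invocation.
    NAME=a234567890123456789012345678901234567890123456789012345678
    oc new-app https://github.com/sclorg/nodejs-ex --name "$NAME" --build-env=BUILD_LOGLEVEL=5
    oc logs -f bc/"$NAME"   # follow the first build, as the test does
    oc get builds           # the phase should reach Complete; in this run it went Failed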
--> Found image 33ddc20 (5 weeks old) in image stream "openshift/nodejs" under tag "14-ubi8" for "nodejs"

Node.js 14
----------
Node.js 14 available as container is a base platform for building and running various Node.js 14 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

Tags: builder, nodejs, nodejs14

* The source repository appears to match: nodejs
* A source build using source code from https://github.com/sclorg/nodejs-ex will be created
* The resulting image will be pushed to image stream tag "a234567890123456789012345678901234567890123456789012345678:latest"
* Use 'oc start-build' to trigger a new build

--> Creating resources ...
imagestream.image.openshift.io "a234567890123456789012345678901234567890123456789012345678" created
buildconfig.build.openshift.io "a234567890123456789012345678901234567890123456789012345678" created
deployment.apps "a234567890123456789012345678901234567890123456789012345678" created
service "a234567890123456789012345678901234567890123456789012345678" created
--> Success
Build scheduled, use 'oc logs -f buildconfig/a234567890123456789012345678901234567890123456789012345678' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/a234567890123456789012345678901234567890123456789012345678'
Run 'oc status' to view your app.
STEP: waiting for the build to complete
Oct 13 10:25:01.947: INFO: WaitForABuild returning with error: The build "a234567890123456789012345678901234567890123456789012345678-1" status is "Failed"
Oct 13 10:25:01.948: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs -f bc/a234567890123456789012345678901234567890123456789012345678 --timestamps'
Oct 13 10:25:02.176: INFO: build logs :
2022-10-13T10:23:23.406744754Z I1013 10:23:23.406661 1 builder.go:393] openshift-builder 4.9.0-202210061647.p0.g1a32676.assembly.stream-1a32676
2022-10-13T10:23:23.406940329Z I1013 10:23:23.406922 1 builder.go:393] Powered by buildah v1.22.4
2022-10-13T10:23:23.415829487Z I1013 10:23:23.415787 1 builder.go:394] redacted build:
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} 2022-10-13T10:23:23.416849352Z Cloning "https://github.com/sclorg/nodejs-ex" ... 2022-10-13T10:23:23.416873736Z I1013 10:23:23.416849 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex 2022-10-13T10:23:23.416873736Z I1013 10:23:23.416865 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex 2022-10-13T10:23:39.417973933Z I1013 10:23:39.417875 1 repository.go:545] Command execution timed out after 16s 2022-10-13T10:23:39.418108783Z WARNING: timed out waiting for git server, will wait 1m4s2022-10-13T10:23:39.418149158Z 2022-10-13T10:23:39.418181835Z I1013 10:23:39.418170 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex 2022-10-13T10:23:39.418225370Z I1013 10:23:39.418214 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex 2022-10-13T10:23:59.503403201Z I1013 10:23:59.503331 1 repository.go:541] Error executing command: exit status 128 2022-10-13T10:23:59.503554375Z I1013 10:23:59.503536 1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com 2022-10-13T10:24:59.655751191Z error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com Oct 13 10:25:02.176: INFO: Dumping pod state for namespace e2e-test-new-app-wckrp Oct 13 10:25:02.176: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config get pods -o yaml' Oct 13 10:25:02.371: INFO: apiVersion: v1 items: - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.165.125" ], "mac": "fa:16:3e:31:30:74", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.165.125" ], "mac": "fa:16:3e:31:30:74", "default": true, "dns": {} }] openshift.io/build.name: a234567890123456789012345678901234567890123456789012345678-1 openshift.io/scc: privileged creationTimestamp: "2022-10-13T10:22:56Z" finalizers: - kuryr.openstack.org/pod-finalizer labels: openshift.io/build.name: a234567890123456789012345678901234567890123456789012345678-1 name: a234567890123456789012345678901234567890123456789012345678-1-build namespace: e2e-test-new-app-wckrp ownerReferences: - apiVersion: build.openshift.io/v1 controller: true kind: Build name: a234567890123456789012345678901234567890123456789012345678-1 uid: e4b59e1a-94a3-4d33-a826-9b209b205ee1 resourceVersion: "955211" uid: cd09e5be-7847-4742-8f63-c558a46f2b21 spec: activeDeadlineSeconds: 604800 containers: - args: - openshift-sti-build - --loglevel=5 env: - name: BUILD value: | 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} - name: LANG value: C.utf8 - name: SOURCE_REPOSITORY value: https://github.com/sclorg/nodejs-ex - name: SOURCE_URI value: https://github.com/sclorg/nodejs-ex - name: BUILD_LOGLEVEL value: "5" - name: ALLOWED_UIDS value: 1- - name: DROP_CAPS value: KILL,MKNOD,SETGID,SETUID - name: PUSH_DOCKERCFG_PATH value: /var/run/secrets/openshift.io/push - name: PULL_DOCKERCFG_PATH value: /var/run/secrets/openshift.io/pull - name: BUILD_REGISTRIES_CONF_PATH value: /var/run/configs/openshift.io/build-system/registries.conf - name: BUILD_REGISTRIES_DIR_PATH value: /var/run/configs/openshift.io/build-system/registries.d - name: BUILD_SIGNATURE_POLICY_PATH value: /var/run/configs/openshift.io/build-system/policy.json - name: BUILD_STORAGE_CONF_PATH value: /var/run/configs/openshift.io/build-system/storage.conf - name: BUILD_STORAGE_DRIVER value: overlay - name: BUILD_BLOBCACHE_DIR value: /var/cache/blobs - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imagePullPolicy: IfNotPresent name: sti-build resources: {} securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/lib/kubelet/config.json name: node-pullsecrets - mountPath: /tmp/build name: buildworkdir - mountPath: /var/lib/containers/cache name: buildcachedir - mountPath: /var/run/secrets/openshift.io/push name: builder-dockercfg-xsbfr-push readOnly: true - mountPath: /var/run/secrets/openshift.io/pull name: builder-dockercfg-xsbfr-pull readOnly: true - mountPath: /var/run/configs/openshift.io/build-system name: build-system-configs readOnly: true - mountPath: /var/run/configs/openshift.io/certs name: build-ca-bundles - mountPath: /var/run/configs/openshift.io/pki name: build-proxy-ca-bundles - mountPath: /var/lib/containers/storage name: container-storage-root - mountPath: /var/cache/blobs name: build-blob-cache - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lx97v readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: builder-dockercfg-xsbfr initContainers: - args: - openshift-git-clone - --loglevel=5 env: - name: BUILD value: | 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} - name: LANG value: C.utf8 - name: SOURCE_REPOSITORY value: https://github.com/sclorg/nodejs-ex - name: SOURCE_URI value: https://github.com/sclorg/nodejs-ex - name: BUILD_LOGLEVEL value: "5" - name: ALLOWED_UIDS value: 1- - name: DROP_CAPS value: KILL,MKNOD,SETGID,SETUID - name: BUILD_REGISTRIES_CONF_PATH value: /var/run/configs/openshift.io/build-system/registries.conf - name: BUILD_REGISTRIES_DIR_PATH value: /var/run/configs/openshift.io/build-system/registries.d - name: BUILD_SIGNATURE_POLICY_PATH value: /var/run/configs/openshift.io/build-system/policy.json - name: BUILD_STORAGE_CONF_PATH value: /var/run/configs/openshift.io/build-system/storage.conf - name: BUILD_BLOBCACHE_DIR value: /var/cache/blobs - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imagePullPolicy: IfNotPresent name: git-clone resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /tmp/build name: buildworkdir - mountPath: /var/run/configs/openshift.io/build-system name: build-system-configs readOnly: true - mountPath: /var/run/configs/openshift.io/certs name: build-ca-bundles - mountPath: /var/run/configs/openshift.io/pki name: build-proxy-ca-bundles - mountPath: /var/cache/blobs name: build-blob-cache - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lx97v readOnly: true - args: - openshift-manage-dockerfile - --loglevel=5 env: - name: BUILD value: | 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} - name: LANG value: C.utf8 - name: SOURCE_REPOSITORY value: https://github.com/sclorg/nodejs-ex - name: SOURCE_URI value: https://github.com/sclorg/nodejs-ex - name: BUILD_LOGLEVEL value: "5" - name: ALLOWED_UIDS value: 1- - name: DROP_CAPS value: KILL,MKNOD,SETGID,SETUID - name: BUILD_REGISTRIES_CONF_PATH value: /var/run/configs/openshift.io/build-system/registries.conf - name: BUILD_REGISTRIES_DIR_PATH value: /var/run/configs/openshift.io/build-system/registries.d - name: BUILD_SIGNATURE_POLICY_PATH value: /var/run/configs/openshift.io/build-system/policy.json - name: BUILD_STORAGE_CONF_PATH value: /var/run/configs/openshift.io/build-system/storage.conf - name: BUILD_BLOBCACHE_DIR value: /var/cache/blobs - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imagePullPolicy: IfNotPresent name: manage-dockerfile resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /tmp/build name: buildworkdir - mountPath: /var/run/configs/openshift.io/build-system name: build-system-configs readOnly: true - mountPath: /var/run/configs/openshift.io/certs name: build-ca-bundles - mountPath: /var/run/configs/openshift.io/pki name: build-proxy-ca-bundles - mountPath: /var/cache/blobs name: build-blob-cache - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lx97v readOnly: true nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: {} serviceAccount: builder serviceAccountName: builder terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - hostPath: path: /var/lib/kubelet/config.json type: File name: node-pullsecrets - hostPath: path: /var/lib/containers/cache type: "" name: buildcachedir - emptyDir: {} name: buildworkdir - name: builder-dockercfg-xsbfr-push secret: defaultMode: 384 secretName: builder-dockercfg-xsbfr - name: builder-dockercfg-xsbfr-pull secret: defaultMode: 384 secretName: builder-dockercfg-xsbfr - configMap: defaultMode: 420 name: a234567890123456789012345678901234567890123456789012345678-1-sys-config name: build-system-configs - configMap: defaultMode: 420 items: - key: service-ca.crt path: certs.d/image-registry.openshift-image-registry.svc:5000/ca.crt name: a234567890123456789012345678901234567890123456789012345678-1-ca name: build-ca-bundles - configMap: defaultMode: 420 items: - key: ca-bundle.crt 
path: tls-ca-bundle.pem name: a234567890123456789012345678901234567890123456789012345678-1-global-ca name: build-proxy-ca-bundles - emptyDir: {} name: container-storage-root - emptyDir: {} name: build-blob-cache - name: kube-api-access-lx97v projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" message: 'containers with incomplete status: [git-clone manage-dockerfile]' reason: ContainersNotInitialized status: "False" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" message: 'containers with unready status: [sti-build]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" message: 'containers with unready status: [sti-build]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" status: "True" type: PodScheduled containerStatuses: - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imageID: "" lastState: {} name: sti-build ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing hostIP: 10.196.2.169 initContainerStatuses: - containerID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 lastState: {} name: git-clone ready: false restartCount: 0 state: terminated: containerID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0 exitCode: 1 finishedAt: "2022-10-13T10:24:59Z" message: | value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} Cloning "https://github.com/sclorg/nodejs-ex" ... 
I1013 10:23:23.416849 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:23.416865 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:39.417875 1 repository.go:545] Command execution timed out after 16s
WARNING: timed out waiting for git server, will wait 1m4s
I1013 10:23:39.418170 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:39.418214 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex
I1013 10:23:59.503331 1 repository.go:541] Error executing command: exit status 128
I1013 10:23:59.503536 1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com
reason: Error startedAt: "2022-10-13T10:23:23Z" - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imageID: "" lastState: {} name: manage-dockerfile ready: false restartCount: 0 state: waiting: reason: PodInitializing phase: Pending podIP: 10.128.165.125 podIPs: - ip: 10.128.165.125 qosClass: BestEffort startTime: "2022-10-13T10:22:56Z" kind: List metadata: resourceVersion: "" selfLink: ""
[AfterEach] github.com/openshift/origin/test/extended/builds/new_app.go:47
Oct 13 10:25:02.372: INFO: Dumping pod state for namespace e2e-test-new-app-wckrp
Oct 13 10:25:02.372: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config get pods -o yaml'
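The AfterEach dump below is the same pod a moment later (resourceVersion "955279" instead of "955211") and is otherwise unchanged. When the full YAML is too noisy, the same conclusion can be read from the init-container states alone; a small jsonpath sketch:

    # Print just the init-container states instead of the whole pod object.
    oc -n e2e-test-new-app-wckrp get pod a234567890123456789012345678901234567890123456789012345678-1-build \
      -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{" => "}{.state}{"\n"}{end}'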
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} - name: LANG value: C.utf8 - name: SOURCE_REPOSITORY value: https://github.com/sclorg/nodejs-ex - name: SOURCE_URI value: https://github.com/sclorg/nodejs-ex - name: BUILD_LOGLEVEL value: "5" - name: ALLOWED_UIDS value: 1- - name: DROP_CAPS value: KILL,MKNOD,SETGID,SETUID - name: PUSH_DOCKERCFG_PATH value: /var/run/secrets/openshift.io/push - name: PULL_DOCKERCFG_PATH value: /var/run/secrets/openshift.io/pull - name: BUILD_REGISTRIES_CONF_PATH value: /var/run/configs/openshift.io/build-system/registries.conf - name: BUILD_REGISTRIES_DIR_PATH value: /var/run/configs/openshift.io/build-system/registries.d - name: BUILD_SIGNATURE_POLICY_PATH value: /var/run/configs/openshift.io/build-system/policy.json - name: BUILD_STORAGE_CONF_PATH value: /var/run/configs/openshift.io/build-system/storage.conf - name: BUILD_STORAGE_DRIVER value: overlay - name: BUILD_BLOBCACHE_DIR value: /var/cache/blobs - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imagePullPolicy: IfNotPresent name: sti-build resources: {} securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/lib/kubelet/config.json name: node-pullsecrets - mountPath: /tmp/build name: buildworkdir - mountPath: /var/lib/containers/cache name: buildcachedir - mountPath: /var/run/secrets/openshift.io/push name: builder-dockercfg-xsbfr-push readOnly: true - mountPath: /var/run/secrets/openshift.io/pull name: builder-dockercfg-xsbfr-pull readOnly: true - mountPath: /var/run/configs/openshift.io/build-system name: build-system-configs readOnly: true - mountPath: /var/run/configs/openshift.io/certs name: build-ca-bundles - mountPath: /var/run/configs/openshift.io/pki name: build-proxy-ca-bundles - mountPath: /var/lib/containers/storage name: container-storage-root - mountPath: /var/cache/blobs name: build-blob-cache - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lx97v readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: builder-dockercfg-xsbfr initContainers: - args: - openshift-git-clone - --loglevel=5 env: - name: BUILD value: | 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} - name: LANG value: C.utf8 - name: SOURCE_REPOSITORY value: https://github.com/sclorg/nodejs-ex - name: SOURCE_URI value: https://github.com/sclorg/nodejs-ex - name: BUILD_LOGLEVEL value: "5" - name: ALLOWED_UIDS value: 1- - name: DROP_CAPS value: KILL,MKNOD,SETGID,SETUID - name: BUILD_REGISTRIES_CONF_PATH value: /var/run/configs/openshift.io/build-system/registries.conf - name: BUILD_REGISTRIES_DIR_PATH value: /var/run/configs/openshift.io/build-system/registries.d - name: BUILD_SIGNATURE_POLICY_PATH value: /var/run/configs/openshift.io/build-system/policy.json - name: BUILD_STORAGE_CONF_PATH value: /var/run/configs/openshift.io/build-system/storage.conf - name: BUILD_BLOBCACHE_DIR value: /var/cache/blobs - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imagePullPolicy: IfNotPresent name: git-clone resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /tmp/build name: buildworkdir - mountPath: /var/run/configs/openshift.io/build-system name: build-system-configs readOnly: true - mountPath: /var/run/configs/openshift.io/certs name: build-ca-bundles - mountPath: /var/run/configs/openshift.io/pki name: build-proxy-ca-bundles - mountPath: /var/cache/blobs name: build-blob-cache - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lx97v readOnly: true - args: - openshift-manage-dockerfile - --loglevel=5 env: - name: BUILD value: | 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} - name: LANG value: C.utf8 - name: SOURCE_REPOSITORY value: https://github.com/sclorg/nodejs-ex - name: SOURCE_URI value: https://github.com/sclorg/nodejs-ex - name: BUILD_LOGLEVEL value: "5" - name: ALLOWED_UIDS value: 1- - name: DROP_CAPS value: KILL,MKNOD,SETGID,SETUID - name: BUILD_REGISTRIES_CONF_PATH value: /var/run/configs/openshift.io/build-system/registries.conf - name: BUILD_REGISTRIES_DIR_PATH value: /var/run/configs/openshift.io/build-system/registries.d - name: BUILD_SIGNATURE_POLICY_PATH value: /var/run/configs/openshift.io/build-system/policy.json - name: BUILD_STORAGE_CONF_PATH value: /var/run/configs/openshift.io/build-system/storage.conf - name: BUILD_BLOBCACHE_DIR value: /var/cache/blobs - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imagePullPolicy: IfNotPresent name: manage-dockerfile resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /tmp/build name: buildworkdir - mountPath: /var/run/configs/openshift.io/build-system name: build-system-configs readOnly: true - mountPath: /var/run/configs/openshift.io/certs name: build-ca-bundles - mountPath: /var/run/configs/openshift.io/pki name: build-proxy-ca-bundles - mountPath: /var/cache/blobs name: build-blob-cache - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-lx97v readOnly: true nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: {} serviceAccount: builder serviceAccountName: builder terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - hostPath: path: /var/lib/kubelet/config.json type: File name: node-pullsecrets - hostPath: path: /var/lib/containers/cache type: "" name: buildcachedir - emptyDir: {} name: buildworkdir - name: builder-dockercfg-xsbfr-push secret: defaultMode: 384 secretName: builder-dockercfg-xsbfr - name: builder-dockercfg-xsbfr-pull secret: defaultMode: 384 secretName: builder-dockercfg-xsbfr - configMap: defaultMode: 420 name: a234567890123456789012345678901234567890123456789012345678-1-sys-config name: build-system-configs - configMap: defaultMode: 420 items: - key: service-ca.crt path: certs.d/image-registry.openshift-image-registry.svc:5000/ca.crt name: a234567890123456789012345678901234567890123456789012345678-1-ca name: build-ca-bundles - configMap: defaultMode: 420 items: - key: ca-bundle.crt 
path: tls-ca-bundle.pem name: a234567890123456789012345678901234567890123456789012345678-1-global-ca name: build-proxy-ca-bundles - emptyDir: {} name: container-storage-root - emptyDir: {} name: build-blob-cache - name: kube-api-access-lx97v projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" message: 'containers with incomplete status: [git-clone manage-dockerfile]' reason: ContainersNotInitialized status: "False" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" message: 'containers with unready status: [sti-build]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" message: 'containers with unready status: [sti-build]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-13T10:22:56Z" status: "True" type: PodScheduled containerStatuses: - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imageID: "" lastState: {} name: sti-build ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing hostIP: 10.196.2.169 initContainerStatuses: - containerID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 lastState: {} name: git-clone ready: false restartCount: 0 state: terminated: containerID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0 exitCode: 1 finishedAt: "2022-10-13T10:24:59Z" message: | value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} Cloning "https://github.com/sclorg/nodejs-ex" ... 
I1013 10:23:23.416849 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:23.416865 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:39.417875 1 repository.go:545] Command execution timed out after 16s WARNING: timed out waiting for git server, will wait 1m4s I1013 10:23:39.418170 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:39.418214 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:59.503331 1 repository.go:541] Error executing command: exit status 128 I1013 10:23:59.503536 1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com reason: Error startedAt: "2022-10-13T10:23:23Z" - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 imageID: "" lastState: {} name: manage-dockerfile ready: false restartCount: 0 state: waiting: reason: PodInitializing phase: Failed podIP: 10.128.165.125 podIPs: - ip: 10.128.165.125 qosClass: BestEffort startTime: "2022-10-13T10:22:56Z" kind: List metadata: resourceVersion: "" selfLink: "" Oct 13 10:25:02.555: INFO: Dumping configMap state for namespace e2e-test-new-app-wckrp Oct 13 10:25:02.556: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config get configmaps -o yaml' Oct 13 10:25:02.745: INFO: apiVersion: v1 items: - apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- MIIDUTCCAjmgAwIBAgIIWqQHBq17DxYwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE Awwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTY2NTUwNDg0ODAe Fw0yMjEwMTExNjE0MDhaFw0yNDEyMDkxNjE0MDlaMDYxNDAyBgNVBAMMK29wZW5z aGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE2NjU1MDQ4NDgwggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCnQ7kRVFI9BQbx1ViDxaiQ0OxvNHomJEpt HoOQ4O+2U28imqMZoMPQH172nxIpxyNufn/4ObLXEBqNshYRcWv6p16GPLAXxYP2 C4K4H8jQKGPFdtcoe8feeCuWlCghi9AHCa5/pzGK94eDF/hLrsf6zQ+iGx+3FqRf 9m8CqbGdPkvRzWkbX/cNgIAE2SkEfB1jEiygA0kNmQ0lDN0yOoKUwm3UhOBRCr3m mwnYpHWlDQ4anvKKGaz6iqjhn8MZEUXg0b6SpplH/oRko+vqPLYbcxx19Etz7e02 k7866xfEz8Upw/rq/rfjGqbx0p8WIwmngG1JowbAOdNc4We0mfPZAgMBAAGjYzBh MA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTKL313 5EZX7D2w6+wXudOGBxB6STAfBgNVHSMEGDAWgBTKL3135EZX7D2w6+wXudOGBxB6 STANBgkqhkiG9w0BAQsFAAOCAQEAGlUnIqdKOpkqrBgCBIBJxJq8WdZeGwTWVHAn 6LFPsHVSpV8b50ENOQzkrmyL2CM1JPGUFHvUr81pRT7IKKlNa7Gi8f5aUlyg/wc3 tmYB9PyO7KU3EkVxU7KfzCtMYHu/2H0PNeSTKVzgyLA4V7pEZDvCwhOjfKkerVvM CmVoo8XwgTmARM3nNCKQ3Yap0OGU388CmvuRfFkdh1i11xzs34CHIOER+JYSqV5e zVCHpEDuUG/yE0pf4XeqchIv3rCWyt1J5egkSMlBHP9Zhb+IVcd8nIA4kSBijRjB MYGk7eVOXTTBTiuzt2rBlStjWvtjHspLyTbbObqbtrAdv92YfQ== -----END CERTIFICATE----- kind: ConfigMap metadata: creationTimestamp: "2022-10-13T10:22:56Z" name: a234567890123456789012345678901234567890123456789012345678-1-ca namespace: e2e-test-new-app-wckrp ownerReferences: - apiVersion: v1 kind: Pod name: a234567890123456789012345678901234567890123456789012345678-1-build uid: cd09e5be-7847-4742-8f63-c558a46f2b21 resourceVersion: "951626" uid: 63de9da3-3fef-4f31-9152-5b13dcd95571 - apiVersion: v1 data: ca-bundle.crt: "" kind: ConfigMap metadata: creationTimestamp: "2022-10-13T10:22:56Z" name: a234567890123456789012345678901234567890123456789012345678-1-global-ca namespace: e2e-test-new-app-wckrp ownerReferences: - apiVersion: v1 kind: Pod name: 
a234567890123456789012345678901234567890123456789012345678-1-build uid: cd09e5be-7847-4742-8f63-c558a46f2b21 resourceVersion: "951635" uid: 0926106b-b07c-4664-bdfc-a3d3946485ba - apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: "2022-10-13T10:22:56Z" name: a234567890123456789012345678901234567890123456789012345678-1-sys-config namespace: e2e-test-new-app-wckrp ownerReferences: - apiVersion: v1 kind: Pod name: a234567890123456789012345678901234567890123456789012345678-1-build uid: cd09e5be-7847-4742-8f63-c558a46f2b21 resourceVersion: "951631" uid: f5d615ad-2e65-4465-a2e9-b00d5dfc8761 - apiVersion: v1 data: ca.crt: | -----BEGIN CERTIFICATE----- MIIDMjCCAhqgAwIBAgIILN1CKhOBc2UwDQYJKoZIhvcNAQELBQAwNzESMBAGA1UE CxMJb3BlbnNoaWZ0MSEwHwYDVQQDExhrdWJlLWFwaXNlcnZlci1sYi1zaWduZXIw HhcNMjIxMDExMTYwMjIzWhcNMzIxMDA4MTYwMjIzWjA3MRIwEAYDVQQLEwlvcGVu c2hpZnQxITAfBgNVBAMTGGt1YmUtYXBpc2VydmVyLWxiLXNpZ25lcjCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBANuVs0Z9M+eZOvZAbxX1JEXhGJ7cFlW+ q1ZHT9zSgI6Riga/Jw/NjL+kjnhxsqz3ez/aDsva2zPmXaOZ2FjW7peUOMh089n0 n5WbEB0tBNCZCBOpXvWu3/2wqfLfa8hl+YpbU+pQvO7mXqMdrIzinJpLbl20HlfA jlhTWSGAPqZft4hJzjel2SZiIUlCnp7FrEG42JFxREExuSkoPLhWRC0xfFB5pA9V JklEsBVb23M4Vti/BfwukvAiplx2X69+Qc9fXm7i+L45eSc9yQss5X67/1z7RsPa n3708K8JGFeXYuJ6nYQooQbhj3cvxtY31TPxIKcQE1FJa0Qmft+VYZkCAwEAAaNC MEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJlv mLJKYamTvm9Ks5bqMTNNbuFwMA0GCSqGSIb3DQEBCwUAA4IBAQCMEXtW2kb4gCyF NqW2f5ABK+9eMe9MjGUNYDY2kdYMwiw/nz89kwt/a3Ck5mTHnZIENNjTkYdv2wTC DFFCXQJFbSqyCpfEaTuCRpsBM4sZJrZdpjW74aqo7KwyQ3Gm9fClJuGfa2QF/gWU v7QF/8u732NVWC6DUUzu6xBMrTDnOjtKeMJ5PvfUpZv9u/RvWmkHBpQZfroBvuDy 8PDJUjgJj0k/gIXljO3K9yLUHw76lKimmXdn5JR/UjZasQVY3t5FMDt1No6VjpLt 811ELzxHsYsrzbeKlzBbZko1EIhIV9b5DXmykivnucJJC6gNrXnd4RMp/yHrdluN e5IpzDw7 -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIDQDCCAiigAwIBAgIICo9mBwuOce4wDQYJKoZIhvcNAQELBQAwPjESMBAGA1UE CxMJb3BlbnNoaWZ0MSgwJgYDVQQDEx9rdWJlLWFwaXNlcnZlci1sb2NhbGhvc3Qt c2lnbmVyMB4XDTIyMTAxMTE2MDIyMloXDTMyMTAwODE2MDIyMlowPjESMBAGA1UE CxMJb3BlbnNoaWZ0MSgwJgYDVQQDEx9rdWJlLWFwaXNlcnZlci1sb2NhbGhvc3Qt c2lnbmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqeSZnR3XSMrI As3vxbqT8KadC2vLa1Sv5VnnMnEaMzuJ0R0AwIgLDOVhNQKMN6KKnrHcdXhBuBT9 kSgSKp4zlw65L7Eomgz2pGTqXrSL06xaXaxUXt7XxqDwEBEEueTacjSEkFbuSVLs x9alZYzg9ExhAz7za665/03tTEa+4bglAwqnw7/3xEauH7tyP+d3niLSewwXg8UF JtxZ7CHMKy/afV9+q61I6ULkj+V+Lt9eo11ucYTnJzmlGEac/n7fLj++lFwiafzq GxamgCaXBo6INUpX/8x2KZemHEXMYMRnsNHRmXjZi7PJIEP4doPxWEDS6reuS0P5 urUkyOHfAQIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB /zAdBgNVHQ4EFgQUP6qELERdYc51gPE2PEiS/skbuEcwDQYJKoZIhvcNAQELBQAD ggEBABzoKx1Od3m2Koc5+g4SAFZT1+1LYBC8c+ew3v9mizzH6X5kXopdJkFZtHEN GBnd8Dlmjwu+DBppYWBvTz1/hC2+pZSVO4lbEWHeRB28unvzRfdT49OtADyCi0b4 +Mr4C8BYb9FnfPXrMK1o7a8TW+NiV+Q5jeNnWSgqohV0U6peSFtHLWkfm3jF7xLL FrWPxiISIz37nPIIDdUrlNPVaNAI1kdynxC58faJJXfO+wWn/7ShvglL+sYhnL+K Fh2Nbqv6p+hBHLJ2BOLQNwuGDv2LNZ+/hHUCboDaSEBh0AhTiGYzLWvtMeF6WGGI HyS+I56cBeKvPQzlFdone09rvqo= -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIDTDCCAjSgAwIBAgIIOwJx6MDGIWYwDQYJKoZIhvcNAQELBQAwRDESMBAGA1UE CxMJb3BlbnNoaWZ0MS4wLAYDVQQDEyVrdWJlLWFwaXNlcnZlci1zZXJ2aWNlLW5l dHdvcmstc2lnbmVyMB4XDTIyMTAxMTE2MDIyMloXDTMyMTAwODE2MDIyMlowRDES MBAGA1UECxMJb3BlbnNoaWZ0MS4wLAYDVQQDEyVrdWJlLWFwaXNlcnZlci1zZXJ2 aWNlLW5ldHdvcmstc2lnbmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC AQEAsC+Rrx7G1shNCywb0QxGuLYAzSoo3ML6l2KVR9NHydMQBDiOFd0+Sc7mczzu DoA70JPRyApjCm2QsZ1hNGV4WvDYzYemVQJgN1h8ogooohJNGieN9fnkfTiG96Sz 0klaylWtr2WF0W6zyDMjT9DaRdQl9Th1lNBUFF3cwY+XIzzSZdS1ErUj1H6rzcdh 
HDoLmsuKkU9iQXDaOEhZ6xVEEF0P9Ich9PhsDjut6mmyC+bAOMNd+nqgzeX1JCC/ wlEhSV6TWIhxj5N8Ug/lsevxtq0HQLMaBowCmjBzuvc93WfndxGzcWFKqjNq5ZMW j8qbGel+3n0buQrjsE8384bAbwIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAqQwDwYD VR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUBWOF9EVp9ugxbTYWOonVZLpqHjUwDQYJ KoZIhvcNAQELBQADggEBAIoS1fo2hRMp0iBRzIkl7B6ELDmWl7t6lZVp9qxYgbk+ O5eBuuh5b4ZDKwFt74IlvLvXJTESGMrEPo47hf+FmJPbqrBx3Dc4OsTwkhVwmdzb CfEUzCYtVV2lKOH5EeMG6lb5wbTznYl/W0Vh4qZ6qNSRPwwSeMf0OWtdXu89QEm5 F5T6GVlSZXBqs1AzuljEbBa9i/ExAenOQBqWow0JeTkWV1AgngIOh5+wBSOHYeaD 154r0GVaDixcRvB1KC+QzOyHzSUkjlnKzzsY09qiY2Ne6PfXDLm6TCzI6vqtUM19 dK/uFHtl/UwN9BreR7iElcZUr+c8U8lSFOSm66JmkeI= -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIDlzCCAn+gAwIBAgIIfks7M1UA4OowDQYJKoZIhvcNAQELBQAwWTFXMFUGA1UE AwxOb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2xvY2FsaG9zdC1y ZWNvdmVyeS1zZXJ2aW5nLXNpZ25lckAxNjY1NTA0ODk3MB4XDTIyMTAxMTE2MTQ1 N1oXDTMyMTAwODE2MTQ1OFowWTFXMFUGA1UEAwxOb3BlbnNoaWZ0LWt1YmUtYXBp c2VydmVyLW9wZXJhdG9yX2xvY2FsaG9zdC1yZWNvdmVyeS1zZXJ2aW5nLXNpZ25l ckAxNjY1NTA0ODk3MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA70nx R0LL9lcuXjtZoAIdPQBb4pHxv2d2ClCxNsWTnQYiMPL6xUlDXLrzLeM21dsmHi7h Kmsxfyk/dkXIO5v8j1EA52L0hMUTVaxxisZo9WCAimDuwIhkDffhYKyXxztB75A5 OheKWWdq+HioM3cDhRZi9ifPv10PfPpKPK660bCOzQDJXnvrgI8P3OdjCILzu0ZL GVJiqFJX8gHt+I7EaWRsZZmomhmwdg28j/MevgYoF91aTXK9skbaEEjABtgytRqQ udTM1lS8G6A/ezOEkobJxKk65FQ9Gld0Wc36BVA85v+EiXK7selhHTozueo34nLP gwRJUU11Pw2PI6vyfwIDAQABo2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/ BAUwAwEB/zAdBgNVHQ4EFgQUybhbyl062rBbI8U++BRyn6Ufx1kwHwYDVR0jBBgw FoAUybhbyl062rBbI8U++BRyn6Ufx1kwDQYJKoZIhvcNAQELBQADggEBAD8ZXhK4 7GJLcjRCTNFCuOoZoxniIFePyz+vywNk+nVADNbWHsbTYPr5lrdqNumzop7uQhj5 m0gBnEq9WFQvf8aYrkm3Y+qxs8+MyioshINFzNIej3EcE1qBmh84IjiHE9YWjYCe WKKNMRZopFx9ZAY3Qky8zgAPKKE8P7xTvHdNKV8T80qgei74D810niig8rwmthOU KcDbcigPykla3bJ3hEQCQI0Y0xLzptEZMb8jlSVlfVx/WAuyfVnPSRBHwyey3gpQ sXuMng2EzLIaODEuoRRHgTEfqRT1d20+rCXz/XQTsCHjtn3Yx6Nu44FO6oTm1sAb XQOxjoXGgUv7o2M= -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIDbzCCAlegAwIBAgIIY75bKNpoEAEwDQYJKoZIhvcNAQELBQAwJjEkMCIGA1UE AwwbaW5ncmVzcy1vcGVyYXRvckAxNjY1NTA1MDM5MB4XDTIyMTAxMTE2MTg1N1oX DTI0MTAxMDE2MTg1OFowJzElMCMGA1UEAwwcKi5hcHBzLm9zdGVzdC5zaGlmdHN0 YWNrLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKJA0zaaSN20 Q5BTuruRbaGcTbybOdVWiYrmi8PrgXnk8obLF4W4Bmtsb/wpdc5M5BAP/rZtl4WF FlAfynzuPWlEIbMwgfFlKVG7l1gWWGmUvUnSev713+dfEQyFSgKYVH/AxkpzOn1f dONQ6vJ4QzmKAUpm7Bp00SuVvY0UL1+5jzv1SVpohyJ4UmYQuOOpjkMPoJYqLPNF cM6U910MyqViK7UH0NyNMB0Mh19byJvBlhfRLHw7Fvw+sPtnQN7iabTIHphaSrZI tDdFzLLtf+PMbLl6w5k18ZicH9J5EPyPuDz/zLkMDKaSpTr8CsCzwyMceM9IwTBC TDcIU8C8fH8CAwEAAaOBnzCBnDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYI KwYBBQUHAwEwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUNSJv6olBlRnaqXAwdaZy sp7dGLMwHwYDVR0jBBgwFoAUD1SAeJJkWGq+U06gBT1344dhVlgwJwYDVR0RBCAw HoIcKi5hcHBzLm9zdGVzdC5zaGlmdHN0YWNrLmNvbTANBgkqhkiG9w0BAQsFAAOC AQEAj/YFuJJPU3E/VansQjzpWhFVOjbaplfaYn1gvsEyokQnuxAAOzAfqvjnEHrU xVVJV13ckcjJ7VIUUy5wGf7CgJRLXPbjJBtOBDm2WyIf0qULQKG+tJ67+eh81BWq DnIrpL8QbiPzl9ufkbQCTifeli2yPiyNepn5d4b+RdhGVPS9sLZiU3SBqa5Tavtl T/HNrqWf+0F/yTtmIKs00d5lN5+/8bJcds2S4g9C2dqeIMLZnmVTgD1H9Ky17B1J /SRnHd1THpQ3HiCg/aPzlyT2S9kswzzo0DA8WFtuD1pbMeERPWu0gSJtUmGu+htr 3HAqITRplOUs+7rAvSG/ZbRyaQ== -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIIDDDCCAfSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtpbmdy ZXNzLW9wZXJhdG9yQDE2NjU1MDUwMzkwHhcNMjIxMDExMTYxNzE4WhcNMjQxMDEw MTYxNzE5WjAmMSQwIgYDVQQDDBtpbmdyZXNzLW9wZXJhdG9yQDE2NjU1MDUwMzkw ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDZk9YqsZXxy/YkoT+RarcI 
Ko20B7xhiThks1rVncJ2HBUo8V3hurUO5tOrAAbIeMYj/GzdllCciTAhgpV65lGg GwklkBuRSp8rhqrsqpePoNbyLiHg97Pv5PDcrpwfvVBd3kPPQhgpWNaNIctNBQMD fSBQqbW+Qq0/mOcqVRmew9LRr9VDY/FH9mjk1s5kp/d7YdpveTf7o9Ay6tW/Jmm+ An8CteDngHcDT03etReUOZvhSb9yt52Wry8uisfdmZmNZ0ZMNSVJWctTWjSsknhW 1gHpDPWNlz7DKYrzjaKt5U2WYmQ7gNeZ4MOJHzx5FNvjc9y3oDYN/WKQxbQ/dAdN AgMBAAGjRTBDMA4GA1UdDwEB/wQEAwICpDASBgNVHRMBAf8ECDAGAQH/AgEAMB0G A1UdDgQWBBQPVIB4kmRYar5TTqAFPXfjh2FWWDANBgkqhkiG9w0BAQsFAAOCAQEA VscU7ev2DCrEl8qxDhgqCZesY+i2HmQPS6lMm/kvwpXskDnSJtt5y9WJrY0OnOdc W2MDcDSbMckZ8ripMFPIfETtuCCAJTnkGa31eNOB4VvqeTf0LDJtK/zAUVKDvd8K Yc3dDeutLpwAJwwSLeQrEw2FTVfWp4RY82OqHiXvoihIYlTSfmgrMMXylPpCHY+l ZvC144hMh/TV3W+xyJmh0EQ3LBE4zLqFv2ysyQ4o6lhwdmFPAmEJ37oc6tb3ZKQA VpfACCP/POIw45BPmeBkggEw9KjpLyB1K1G8wvDgeOTSBTK7in801xsA9ckosS7F a3dfOThY2ElYs2djq3Dr1w== -----END CERTIFICATE----- kind: ConfigMap metadata: annotations: kubernetes.io/description: Contains a CA bundle that can be used to verify the kube-apiserver when using internal endpoints such as the internal service IP or kubernetes.default.svc. No other usage is guaranteed across distributions of Kubernetes clusters. creationTimestamp: "2022-10-13T10:22:39Z" name: kube-root-ca.crt namespace: e2e-test-new-app-wckrp resourceVersion: "950749" uid: fcf82760-b2a0-47b3-8ed1-b4cee9f636a3 - apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- MIIDUTCCAjmgAwIBAgIIWqQHBq17DxYwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE Awwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTY2NTUwNDg0ODAe Fw0yMjEwMTExNjE0MDhaFw0yNDEyMDkxNjE0MDlaMDYxNDAyBgNVBAMMK29wZW5z aGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE2NjU1MDQ4NDgwggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCnQ7kRVFI9BQbx1ViDxaiQ0OxvNHomJEpt HoOQ4O+2U28imqMZoMPQH172nxIpxyNufn/4ObLXEBqNshYRcWv6p16GPLAXxYP2 C4K4H8jQKGPFdtcoe8feeCuWlCghi9AHCa5/pzGK94eDF/hLrsf6zQ+iGx+3FqRf 9m8CqbGdPkvRzWkbX/cNgIAE2SkEfB1jEiygA0kNmQ0lDN0yOoKUwm3UhOBRCr3m mwnYpHWlDQ4anvKKGaz6iqjhn8MZEUXg0b6SpplH/oRko+vqPLYbcxx19Etz7e02 k7866xfEz8Upw/rq/rfjGqbx0p8WIwmngG1JowbAOdNc4We0mfPZAgMBAAGjYzBh MA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTKL313 5EZX7D2w6+wXudOGBxB6STAfBgNVHSMEGDAWgBTKL3135EZX7D2w6+wXudOGBxB6 STANBgkqhkiG9w0BAQsFAAOCAQEAGlUnIqdKOpkqrBgCBIBJxJq8WdZeGwTWVHAn 6LFPsHVSpV8b50ENOQzkrmyL2CM1JPGUFHvUr81pRT7IKKlNa7Gi8f5aUlyg/wc3 tmYB9PyO7KU3EkVxU7KfzCtMYHu/2H0PNeSTKVzgyLA4V7pEZDvCwhOjfKkerVvM CmVoo8XwgTmARM3nNCKQ3Yap0OGU388CmvuRfFkdh1i11xzs34CHIOER+JYSqV5e zVCHpEDuUG/yE0pf4XeqchIv3rCWyt1J5egkSMlBHP9Zhb+IVcd8nIA4kSBijRjB MYGk7eVOXTTBTiuzt2rBlStjWvtjHspLyTbbObqbtrAdv92YfQ== -----END CERTIFICATE----- kind: ConfigMap metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" creationTimestamp: "2022-10-13T10:22:39Z" name: openshift-service-ca.crt namespace: e2e-test-new-app-wckrp resourceVersion: "950761" uid: b9f8476c-e2d5-4c9a-879e-0f67d104c4a2 kind: List metadata: resourceVersion: "" selfLink: "" Oct 13 10:25:02.794: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config describe pod/a234567890123456789012345678901234567890123456789012345678-1-build -n e2e-test-new-app-wckrp' Oct 13 10:25:03.024: INFO: Describing pod "a234567890123456789012345678901234567890123456789012345678-1-build" Name: a234567890123456789012345678901234567890123456789012345678-1-build Namespace: e2e-test-new-app-wckrp Priority: 0 Node: ostest-n5rnf-worker-0-94fxs/10.196.2.169 Start Time: Thu, 13 Oct 2022 10:22:56 +0000 Labels: openshift.io/build.name=a234567890123456789012345678901234567890123456789012345678-1 Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "kuryr", 
"interface": "eth0", "ips": [ "10.128.165.125" ], "mac": "fa:16:3e:31:30:74", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.165.125" ], "mac": "fa:16:3e:31:30:74", "default": true, "dns": {} }] openshift.io/build.name: a234567890123456789012345678901234567890123456789012345678-1 openshift.io/scc: privileged Status: Failed IP: 10.128.165.125 IPs: IP: 10.128.165.125 Controlled By: Build/a234567890123456789012345678901234567890123456789012345678-1 Init Containers: git-clone: Container ID: cri-o://916fa938e9ae3fb68ac6af70a7af9cb0a1471052443397900767a8e9817f04b0 Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 Port: <none> Host Port: <none> Args: openshift-git-clone --loglevel=5 State: Terminated Reason: Error Message: value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} Cloning "https://github.com/sclorg/nodejs-ex" ... 
I1013 10:23:23.416849 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:23.416865 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:39.417875 1 repository.go:545] Command execution timed out after 16s WARNING: timed out waiting for git server, will wait 1m4s I1013 10:23:39.418170 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:39.418214 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:59.503331 1 repository.go:541] Error executing command: exit status 128 I1013 10:23:59.503536 1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com Exit Code: 1 Started: Thu, 13 Oct 2022 10:23:23 +0000 Finished: Thu, 13 Oct 2022 10:24:59 +0000 Ready: False Restart Count: 0 Environment: BUILD: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"out
put":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} LANG: C.utf8 SOURCE_REPOSITORY: https://github.com/sclorg/nodejs-ex SOURCE_URI: https://github.com/sclorg/nodejs-ex BUILD_LOGLEVEL: 5 ALLOWED_UIDS: 1- DROP_CAPS: KILL,MKNOD,SETGID,SETUID BUILD_REGISTRIES_CONF_PATH: /var/run/configs/openshift.io/build-system/registries.conf BUILD_REGISTRIES_DIR_PATH: /var/run/configs/openshift.io/build-system/registries.d BUILD_SIGNATURE_POLICY_PATH: /var/run/configs/openshift.io/build-system/policy.json BUILD_STORAGE_CONF_PATH: /var/run/configs/openshift.io/build-system/storage.conf BUILD_BLOBCACHE_DIR: /var/cache/blobs HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /tmp/build from buildworkdir (rw) /var/cache/blobs from build-blob-cache (rw) /var/run/configs/openshift.io/build-system from build-system-configs (ro) /var/run/configs/openshift.io/certs from build-ca-bundles (rw) /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx97v (ro) manage-dockerfile: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 Image ID: Port: <none> Host Port: <none> Args: openshift-manage-dockerfile --loglevel=5 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Environment: BUILD: 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} LANG: C.utf8 SOURCE_REPOSITORY: https://github.com/sclorg/nodejs-ex SOURCE_URI: https://github.com/sclorg/nodejs-ex BUILD_LOGLEVEL: 5 ALLOWED_UIDS: 1- DROP_CAPS: KILL,MKNOD,SETGID,SETUID BUILD_REGISTRIES_CONF_PATH: /var/run/configs/openshift.io/build-system/registries.conf BUILD_REGISTRIES_DIR_PATH: /var/run/configs/openshift.io/build-system/registries.d BUILD_SIGNATURE_POLICY_PATH: /var/run/configs/openshift.io/build-system/policy.json BUILD_STORAGE_CONF_PATH: /var/run/configs/openshift.io/build-system/storage.conf BUILD_BLOBCACHE_DIR: /var/cache/blobs HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /tmp/build from buildworkdir (rw) /var/cache/blobs from build-blob-cache (rw) /var/run/configs/openshift.io/build-system from build-system-configs (ro) /var/run/configs/openshift.io/certs from build-ca-bundles (rw) /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx97v (ro) Containers: sti-build: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917 Image ID: Port: <none> Host Port: <none> Args: openshift-sti-build --loglevel=5 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Environment: BUILD: 
{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:pullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} LANG: C.utf8 SOURCE_REPOSITORY: https://github.com/sclorg/nodejs-ex SOURCE_URI: https://github.com/sclorg/nodejs-ex BUILD_LOGLEVEL: 5 ALLOWED_UIDS: 1- DROP_CAPS: KILL,MKNOD,SETGID,SETUID PUSH_DOCKERCFG_PATH: /var/run/secrets/openshift.io/push PULL_DOCKERCFG_PATH: /var/run/secrets/openshift.io/pull BUILD_REGISTRIES_CONF_PATH: /var/run/configs/openshift.io/build-system/registries.conf BUILD_REGISTRIES_DIR_PATH: /var/run/configs/openshift.io/build-system/registries.d BUILD_SIGNATURE_POLICY_PATH: /var/run/configs/openshift.io/build-system/policy.json BUILD_STORAGE_CONF_PATH: /var/run/configs/openshift.io/build-system/storage.conf BUILD_STORAGE_DRIVER: overlay BUILD_BLOBCACHE_DIR: /var/cache/blobs HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /tmp/build from buildworkdir (rw) /var/cache/blobs from build-blob-cache (rw) /var/lib/containers/cache from buildcachedir (rw) /var/lib/containers/storage from container-storage-root (rw) /var/lib/kubelet/config.json from node-pullsecrets (rw) /var/run/configs/openshift.io/build-system from build-system-configs (ro) /var/run/configs/openshift.io/certs from build-ca-bundles (rw) /var/run/configs/openshift.io/pki from build-proxy-ca-bundles (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx97v (ro) /var/run/secrets/openshift.io/pull from builder-dockercfg-xsbfr-pull (ro) /var/run/secrets/openshift.io/push from builder-dockercfg-xsbfr-push (ro) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: node-pullsecrets: Type: HostPath (bare host directory volume) Path: /var/lib/kubelet/config.json HostPathType: File buildcachedir: Type: HostPath (bare host directory volume) Path: /var/lib/containers/cache HostPathType: buildworkdir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> builder-dockercfg-xsbfr-push: Type: Secret (a volume populated by a Secret) SecretName: builder-dockercfg-xsbfr Optional: false builder-dockercfg-xsbfr-pull: Type: Secret (a volume populated by a Secret) SecretName: builder-dockercfg-xsbfr Optional: false build-system-configs: Type: ConfigMap (a volume populated by a ConfigMap) Name: a234567890123456789012345678901234567890123456789012345678-1-sys-config Optional: false build-ca-bundles: Type: ConfigMap (a volume populated by a ConfigMap) Name: a234567890123456789012345678901234567890123456789012345678-1-ca Optional: false build-proxy-ca-bundles: Type: ConfigMap (a volume populated by a ConfigMap) Name: a234567890123456789012345678901234567890123456789012345678-1-global-ca Optional: false container-storage-root: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> build-blob-cache: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: 
SizeLimit: <unset> kube-api-access-lx97v: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m6s default-scheduler Successfully assigned e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678-1-build to ostest-n5rnf-worker-0-94fxs Normal AddedInterface 101s multus Add eth0 [10.128.165.125/23] from kuryr Normal Pulled 100s kubelet Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine Normal Created 100s kubelet Created container git-clone Normal Started 100s kubelet Started container git-clone Oct 13 10:25:03.024: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c git-clone -n e2e-test-new-app-wckrp' Oct 13 10:25:03.276: INFO: Log for pod "a234567890123456789012345678901234567890123456789012345678-1-build"/"git-clone" ----> I1013 10:23:23.406661 1 builder.go:393] openshift-builder 4.9.0-202210061647.p0.g1a32676.assembly.stream-1a32676 I1013 10:23:23.406922 1 builder.go:393] Powered by buildah v1.22.4 I1013 10:23:23.415787 1 builder.go:394] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"a234567890123456789012345678901234567890123456789012345678-1","namespace":"e2e-test-new-app-wckrp","uid":"e4b59e1a-94a3-4d33-a826-9b209b205ee1","resourceVersion":"951609","generation":1,"creationTimestamp":"2022-10-13T10:22:55Z","labels":{"app":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/component":"a234567890123456789012345678901234567890123456789012345678","app.kubernetes.io/instance":"a234567890123456789012345678901234567890123456789012345678","buildconfig":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"a234567890123456789012345678901234567890123456789012345678","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"a234567890123456789012345678901234567890123456789012345678","uid":"8c3cd7bb-b916-4463-ad89-5bef6da3bd66","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2022-10-13T10:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c3cd7bb-b916-4463-ad89-5bef6da3bd66\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:git":{".":{},"f:uri":{}},"f:type":{}},"f:strategy":{"f:sourceStrategy":{".":{},"f:env":{},"f:from":{},"f:p
ullSecret":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/sclorg/nodejs-ex"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed"},"pullSecret":{"name":"builder-dockercfg-xsbfr"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest"},"pushSecret":{"name":"builder-dockercfg-xsbfr"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:15542fdb8f9b9ac0afacdaea7b5f0b467ff0d39d0ebd5f95e5e43f2a9da314ed","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"nodejs:14-ubi8"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678:latest","config":{"kind":"BuildConfig","namespace":"e2e-test-new-app-wckrp","name":"a234567890123456789012345678901234567890123456789012345678"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2022-10-13T10:22:55Z","lastTransitionTime":"2022-10-13T10:22:55Z"}]}} Cloning "https://github.com/sclorg/nodejs-ex" ... I1013 10:23:23.416849 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:23.416865 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:39.417875 1 repository.go:545] Command execution timed out after 16s WARNING: timed out waiting for git server, will wait 1m4s I1013 10:23:39.418170 1 source.go:237] git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:39.418214 1 repository.go:450] Executing git ls-remote --heads https://github.com/sclorg/nodejs-ex I1013 10:23:59.503331 1 repository.go:541] Error executing command: exit status 128 I1013 10:23:59.503536 1 source.go:237] fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com error: fatal: unable to access 'https://github.com/sclorg/nodejs-ex/': Could not resolve host: github.com <----end of log for "a234567890123456789012345678901234567890123456789012345678-1-build"/"git-clone" Oct 13 10:25:03.277: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c manage-dockerfile -n e2e-test-new-app-wckrp' Oct 13 10:25:03.490: INFO: Error running /usr/local/bin/oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c manage-dockerfile -n e2e-test-new-app-wckrp: StdOut> Error from server (BadRequest): container "manage-dockerfile" in pod "a234567890123456789012345678901234567890123456789012345678-1-build" is waiting to start: PodInitializing StdErr> Error from server (BadRequest): container "manage-dockerfile" in pod "a234567890123456789012345678901234567890123456789012345678-1-build" is waiting to 
start: PodInitializing
Oct 13 10:25:03.490: INFO: Error retrieving logs for pod "a234567890123456789012345678901234567890123456789012345678-1-build"/"manage-dockerfile": exit status 1
Oct 13 10:25:03.490: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c sti-build -n e2e-test-new-app-wckrp'
Oct 13 10:25:03.692: INFO: Error running /usr/local/bin/oc --namespace=e2e-test-new-app-wckrp --kubeconfig=.kube/config logs pod/a234567890123456789012345678901234567890123456789012345678-1-build -c sti-build -n e2e-test-new-app-wckrp:
StdOut>
Error from server (BadRequest): container "sti-build" in pod "a234567890123456789012345678901234567890123456789012345678-1-build" is waiting to start: PodInitializing
StdErr>
Error from server (BadRequest): container "sti-build" in pod "a234567890123456789012345678901234567890123456789012345678-1-build" is waiting to start: PodInitializing
Oct 13 10:25:03.692: INFO: Error retrieving logs for pod "a234567890123456789012345678901234567890123456789012345678-1-build"/"sti-build": exit status 1
Oct 13 10:25:03.692: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 get dc/a234567890123456789012345678901234567890123456789012345678 -o yaml'
Oct 13 10:25:03.915: INFO: Error running /usr/local/bin/oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 get dc/a234567890123456789012345678901234567890123456789012345678 -o yaml:
StdOut>
Error from server (NotFound): deploymentconfigs.apps.openshift.io "a234567890123456789012345678901234567890123456789012345678" not found
StdErr>
Error from server (NotFound): deploymentconfigs.apps.openshift.io "a234567890123456789012345678901234567890123456789012345678" not found
Oct 13 10:25:03.915: INFO: Error getting Deployment Config a234567890123456789012345678901234567890123456789012345678: exit status 1
Oct 13 10:25:03.915: INFO: Running 'oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 get dc/a2345678901234567890123456789012345678901234567890123456789 -o yaml'
Oct 13 10:25:04.107: INFO: Error running /usr/local/bin/oc --namespace=e2e-test-new-app-wckrp --kubeconfig=/tmp/configfile303236974 get dc/a2345678901234567890123456789012345678901234567890123456789 -o yaml:
StdOut>
Error from server (NotFound): deploymentconfigs.apps.openshift.io "a2345678901234567890123456789012345678901234567890123456789" not found
StdErr>
Error from server (NotFound): deploymentconfigs.apps.openshift.io "a2345678901234567890123456789012345678901234567890123456789" not found
Oct 13 10:25:04.107: INFO: Error getting Deployment Config a2345678901234567890123456789012345678901234567890123456789: exit status 1
[AfterEach] [sig-builds][Feature:Builds] oc new-app
github.com/openshift/origin/test/extended/util/client.go:140
STEP: Collecting events from namespace "e2e-test-new-app-wckrp".
STEP: Found 18 events.
Oct 13 10:25:04.145: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: { } Scheduled: Successfully assigned e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678-1-build to ostest-n5rnf-worker-0-94fxs
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:55 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678: {deployment-controller } ScalingReplicaSet: Scaled up replica set a234567890123456789012345678901234567890123456789012345678-fb95dd4dc to 1
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:55 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678tb4vg" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678fx8fg" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678bhqgn" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678h7l7w" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a2345678901234567890123456789012345678901234567890123456788zv8b" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678nzlgz" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678zxbsb" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:56 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a2345678901234567890123456789012345678901234567890123456789w7gh" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:57 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: Error creating: Pod "a234567890123456789012345678901234567890123456789012345678tgvpf" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:22:58 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-fb95dd4dc: {replicaset-controller } FailedCreate: (combined from similar events): Error creating: Pod "a234567890123456789012345678901234567890123456789012345678gkclr" is invalid: spec.containers[0].image: Invalid value: " ": must not have leading or trailing whitespace
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:22 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: {multus } AddedInterface: Add eth0 [10.128.165.125/23] from kuryr
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:23 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1: {build-controller } BuildStarted: Build e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678-1 is now running
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:23 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:23 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Started: Started container git-clone
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:23:23 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1-build: {kubelet ostest-n5rnf-worker-0-94fxs} Created: Created container git-clone
Oct 13 10:25:04.145: INFO: At 2022-10-13 10:24:59 +0000 UTC - event for a234567890123456789012345678901234567890123456789012345678-1: {build-controller } BuildFailed: Build e2e-test-new-app-wckrp/a234567890123456789012345678901234567890123456789012345678-1 failed
Oct 13 10:25:04.164: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 13 10:25:04.164: INFO: a234567890123456789012345678901234567890123456789012345678-1-build ostest-n5rnf-worker-0-94fxs Failed [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC ContainersNotInitialized containers with incomplete status: [git-clone manage-dockerfile]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC ContainersNotReady containers with unready status: [sti-build]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC ContainersNotReady containers with unready status: [sti-build]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:56 +0000 UTC }]
Oct 13 10:25:04.164: INFO:
Oct 13 10:25:04.183: INFO: skipping dumping cluster info - cluster too large
Oct 13 10:25:04.221: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-new-app-wckrp-user}, err: <nil>
Oct 13 10:25:04.255: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-new-app-wckrp}, err: <nil>
Oct 13 10:25:04.271: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~RGqZnbDYVSTNS-SqNEdIFwbgliGNgXsN10hXYQeuEWE}, err: <nil>
[AfterEach] [sig-builds][Feature:Builds] oc new-app
github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-new-app-wckrp" for this suite. fail [github.com/openshift/origin/test/extended/builds/new_app.go:68]: Unexpected error: <*errors.errorString | 0xc00295bda0>: { s: "The build \"a234567890123456789012345678901234567890123456789012345678-1\" status is \"Failed\"", } The build "a234567890123456789012345678901234567890123456789012345678-1" status is "Failed" occurred
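The FailedCreate storm above has a single cause: apiserver validation rejects a container image value that is only whitespace, and the Deployment that oc new-app created appears to hold exactly such a placeholder until its image trigger can resolve the build output. The rejection is easy to reproduce outside the test; this is a hypothetical one-liner (the pod name is invented here), assuming a logged-in oc session allowed to create pods in the current project:

  # Hypothetical reproduction -- "whitespace-image-check" is an invented name,
  # not part of the test suite.
  oc run whitespace-image-check --image=' ' --restart=Never
  # Expected apiserver-side rejection, matching the ReplicaSet events above:
  #   spec.containers[0].image: Invalid value: " ": must not have leading or
  #   trailing whitespace

The test's actual failure is the later BuildFailed event: git-clone started, but build a234567890123456789012345678901234567890123456789012345678-1 finished in status "Failed", which is exactly the error the suite reports.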
fail [k8s.io/kubernetes@v1.22.1/test/e2e/framework/pods.go:212]: wait for pod "append-test" to succeed Expected success, but got an error: <*errors.errorString | 0xc002424430>: { s: "pod \"append-test\" failed with reason: \"\", message: \"\"", } pod "append-test" failed with reason: "", message: ""
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-imageregistry][Feature:ImageAppend] Image append github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-imageregistry][Feature:ImageAppend] Image append github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:22:28.588: INFO: configPath is now "/tmp/configfile1392854775" Oct 13 10:22:28.588: INFO: The user is now "e2e-test-image-append-brffw-user" Oct 13 10:22:28.588: INFO: Creating project "e2e-test-image-append-brffw" Oct 13 10:22:28.788: INFO: Waiting on permissions in project "e2e-test-image-append-brffw" ... Oct 13 10:22:28.796: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:22:28.909: INFO: Waiting for service account "default" secrets (default-token-q7nlh) to include dockercfg/token ... Oct 13 10:22:29.025: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:22:29.147: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:22:29.255: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:22:29.262: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:22:29.270: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:22:29.813: INFO: Project "e2e-test-image-append-brffw" has been fully provisioned. [It] should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/images/append.go:83 Oct 13 10:22:29.910: INFO: Waiting up to 3m0s for pod "append-test" in namespace "e2e-test-image-append-brffw" to be "Succeeded or Failed" Oct 13 10:22:29.930: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.023557ms Oct 13 10:22:31.947: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036883447s Oct 13 10:22:33.966: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055741039s Oct 13 10:22:35.973: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062898309s Oct 13 10:22:37.978: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0678164s Oct 13 10:22:39.989: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078382749s Oct 13 10:22:41.993: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.082545891s Oct 13 10:22:44.003: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.093114235s Oct 13 10:22:46.013: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.103008252s Oct 13 10:22:48.020: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.109845409s Oct 13 10:22:50.031: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.120782622s Oct 13 10:22:52.042: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.131599187s Oct 13 10:22:54.064: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.154090365s Oct 13 10:22:56.077: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.166415295s Oct 13 10:22:58.086: INFO: Pod "append-test": Phase="Pending", Reason="", readiness=false. Elapsed: 28.176048625s Oct 13 10:23:00.101: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 30.190695849s Oct 13 10:23:02.111: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 32.200295818s Oct 13 10:23:04.126: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 34.215567369s Oct 13 10:23:06.130: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 36.219889037s Oct 13 10:23:08.136: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 38.225965366s Oct 13 10:23:10.145: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 40.234551287s Oct 13 10:23:12.158: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 42.247262844s Oct 13 10:23:14.162: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 44.25202662s Oct 13 10:23:16.168: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 46.257989906s Oct 13 10:23:18.176: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 48.265587247s Oct 13 10:23:20.189: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 50.279241766s Oct 13 10:23:22.201: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 52.290266778s Oct 13 10:23:24.209: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 54.298909672s Oct 13 10:23:26.225: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 56.314652881s Oct 13 10:23:28.232: INFO: Pod "append-test": Phase="Running", Reason="", readiness=true. Elapsed: 58.322006445s Oct 13 10:23:30.239: INFO: Pod "append-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.328381651s Oct 13 10:23:32.246: INFO: Pod "append-test": Phase="Failed", Reason="", readiness=false. 
Elapsed: 1m2.335469454s [AfterEach] [sig-imageregistry][Feature:ImageAppend] Image append github.com/openshift/origin/test/extended/images/append.go:75 Oct 13 10:23:32.274: INFO: Running 'oc --namespace=e2e-test-image-append-brffw --kubeconfig=.kube/config describe pod/append-test -n e2e-test-image-append-brffw' Oct 13 10:23:32.496: INFO: Describing pod "append-test" Name: append-test Namespace: e2e-test-image-append-brffw Priority: 0 Node: ostest-n5rnf-worker-0-8kq82/10.196.2.72 Start Time: Thu, 13 Oct 2022 10:22:29 +0000 Labels: <none> Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.182.59" ], "mac": "fa:16:3e:43:63:2c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.182.59" ], "mac": "fa:16:3e:43:63:2c", "default": true, "dns": {} }] openshift.io/scc: anyuid Status: Failed IP: 10.128.182.59 IPs: IP: 10.128.182.59 Containers: test: Container ID: cri-o://9859e2ed4d4b1b0c5220ecbcf3b71919d2946354c918a298dd2cf3e3bc743f53 Image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest Image ID: image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:bc79ad0bb8570f12a3a070b2a15b1c07b81aecf10a5767d262c0f8b16e4c1bd6 Port: <none> Host Port: <none> Command: /bin/bash -c set -euo pipefail; set -x # create a scratch image with fixed date oc image append --insecure --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:scratch1 --image='{"Cmd":["/bin/sleep"]}' --created-at=0 # create a second scratch image with fixed date oc image append --insecure --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:scratch2 --image='{"Cmd":["/bin/sleep"]}' --created-at=0 # modify a shell image oc image append --insecure --from image-registry.openshift-image-registry.svc:5000/openshift/tools:latest --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:busybox1 --image '{"Cmd":["/bin/sleep"]}' # verify mounting works oc create is test2 oc image append --insecure --from image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:scratch2 --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test2:scratch2 --force # add a simple layer to the image mkdir -p /tmp/test/dir touch /tmp/test/1 touch /tmp/test/dir/2 tar cvzf /tmp/layer.tar.gz -C /tmp/test/ . 
oc image append --insecure --from=image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:busybox1 --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:busybox2 /tmp/layer.tar.gz State: Terminated Reason: Error Exit Code: 1 Started: Thu, 13 Oct 2022 10:22:58 +0000 Finished: Thu, 13 Oct 2022 10:23:28 +0000 Ready: False Restart Count: 0 Environment: HOME: /secret Mounts: /secret/.dockercfg from pull-secret (rw,path=".dockercfg") /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5frh2 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: pull-secret: Type: Secret (a volume populated by a Secret) SecretName: builder-dockercfg-5wtd9 Optional: false kube-api-access-5frh2: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 62s default-scheduler Successfully assigned e2e-test-image-append-brffw/append-test to ostest-n5rnf-worker-0-8kq82 Normal AddedInterface 35s multus Add eth0 [10.128.182.59/23] from kuryr Normal Pulling 35s kubelet Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" Normal Pulled 35s kubelet Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" in 74.854252ms Normal Created 34s kubelet Created container test Normal Started 34s kubelet Started container test Oct 13 10:23:32.496: INFO: Running 'oc --namespace=e2e-test-image-append-brffw --kubeconfig=.kube/config logs pod/append-test -c test -n e2e-test-image-append-brffw' Oct 13 10:23:32.652: INFO: Log for pod "append-test"/"test" ----> + oc image append --insecure --to image-registry.openshift-image-registry.svc:5000/e2e-test-image-append-brffw/test:scratch1 '--image={"Cmd":["/bin/sleep"]}' --created-at=0 Uploading ... failed Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers) <----end of log for "append-test"/"test" [AfterEach] [sig-imageregistry][Feature:ImageAppend] Image append github.com/openshift/origin/test/extended/util/client.go:140 STEP: Collecting events from namespace "e2e-test-image-append-brffw". STEP: Found 6 events. 
Oct 13 10:23:32.658: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for append-test: { } Scheduled: Successfully assigned e2e-test-image-append-brffw/append-test to ostest-n5rnf-worker-0-8kq82 Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:57 +0000 UTC - event for append-test: {multus } AddedInterface: Add eth0 [10.128.182.59/23] from kuryr Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:57 +0000 UTC - event for append-test: {kubelet ostest-n5rnf-worker-0-8kq82} Pulling: Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:57 +0000 UTC - event for append-test: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" in 74.854252ms Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:58 +0000 UTC - event for append-test: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container test Oct 13 10:23:32.658: INFO: At 2022-10-13 10:22:58 +0000 UTC - event for append-test: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container test Oct 13 10:23:32.676: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 10:23:32.676: INFO: append-test ostest-n5rnf-worker-0-8kq82 Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:29 +0000 UTC ContainersNotReady containers with unready status: [test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:23:29 +0000 UTC ContainersNotReady containers with unready status: [test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:22:29 +0000 UTC }] Oct 13 10:23:32.676: INFO: Oct 13 10:23:32.684: INFO: skipping dumping cluster info - cluster too large Oct 13 10:23:32.860: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-image-append-brffw-user}, err: <nil> Oct 13 10:23:32.881: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-image-append-brffw}, err: <nil> Oct 13 10:23:32.896: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~roCUBuZvnpDyr56M8gjVPOdb1nOvLE_NUQgFvLll-tw}, err: <nil> [AfterEach] [sig-imageregistry][Feature:ImageAppend] Image append github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-image-append-brffw" for this suite. fail [k8s.io/kubernetes@v1.22.1/test/e2e/framework/pods.go:212]: wait for pod "append-test" to succeed Expected success, but got an error: <*errors.errorString | 0xc002424430>: { s: "pod \"append-test\" failed with reason: \"\", message: \"\"", } pod "append-test" failed with reason: "", message: ""
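For triage, the first append from the pod script can be re-run by hand. This is a sketch, not part of the suite; <namespace> is a placeholder for the generated test project, and the flags are copied from the pod spec above:

  # Re-run of the first failing step; <namespace> is a placeholder for the
  # e2e test project name.
  oc image append --insecure \
    --to image-registry.openshift-image-registry.svc:5000/<namespace>/test:scratch1 \
    --image='{"Cmd":["/bin/sleep"]}' \
    --created-at=0

Per the container log, this very step is what failed: "Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)". The registry never answered the upload in time; the append arguments themselves were not rejected.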
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-auth][Feature:OpenShiftAuthorization] authorization github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-auth][Feature:OpenShiftAuthorization] authorization github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:19:53.847: INFO: configPath is now "/tmp/configfile3242513760" Oct 13 10:19:53.847: INFO: The user is now "e2e-test-bootstrap-policy-z2g96-user" Oct 13 10:19:53.847: INFO: Creating project "e2e-test-bootstrap-policy-z2g96" Oct 13 10:19:54.132: INFO: Waiting on permissions in project "e2e-test-bootstrap-policy-z2g96" ... Oct 13 10:19:54.140: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:19:54.255: INFO: Waiting for service account "default" secrets (default-token-cp6sd) to include dockercfg/token ... Oct 13 10:19:54.349: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:19:54.456: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:19:54.573: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:19:54.594: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:19:54.602: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:19:55.134: INFO: Project "e2e-test-bootstrap-policy-z2g96" has been fully provisioned. [It] should succeed [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/authorization/authorization.go:47 [AfterEach] [sig-auth][Feature:OpenShiftAuthorization] authorization github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:19:55.180: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-bootstrap-policy-z2g96-user}, err: <nil> Oct 13 10:19:55.220: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-bootstrap-policy-z2g96}, err: <nil> Oct 13 10:19:55.249: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~R_SlYrNvyTjtuD9RyTm427KAWERS55tnzQNihXu9mKE}, err: <nil> [AfterEach] [sig-auth][Feature:OpenShiftAuthorization] authorization github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-bootstrap-policy-z2g96" for this suite. skip [github.com/openshift/origin/test/extended/authorization/authorization.go:48]: this test was in integration and didn't cover a real configuration, so it's horribly, horribly wrong now
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-devex][Feature:Templates] templateservicebroker security test github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-devex][Feature:Templates] templateservicebroker security test github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:18:45.926: INFO: configPath is now "/tmp/configfile2207960907" Oct 13 10:18:45.926: INFO: The user is now "e2e-test-templates-c2f4k-user" Oct 13 10:18:45.926: INFO: Creating project "e2e-test-templates-c2f4k" Oct 13 10:18:46.092: INFO: Waiting on permissions in project "e2e-test-templates-c2f4k" ... Oct 13 10:18:46.102: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:18:46.220: INFO: Waiting for service account "default" secrets (default-dockercfg-6qm6t,default-dockercfg-6qm6t) to include dockercfg/token ... Oct 13 10:18:46.309: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:18:46.418: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:18:46.534: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:18:46.559: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:18:46.572: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:18:47.116: INFO: Project "e2e-test-templates-c2f4k" has been fully provisioned. [JustBeforeEach] [sig-devex][Feature:Templates] templateservicebroker security test github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:46 Oct 13 10:18:47.132: INFO: The template service broker is not installed: services "apiserver" not found [AfterEach] github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:151 [AfterEach] [sig-devex][Feature:Templates] templateservicebroker security test github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:18:47.168: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-templates-c2f4k-user}, err: <nil> Oct 13 10:18:47.194: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-templates-c2f4k}, err: <nil> Oct 13 10:18:47.234: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~qqtKc6EKrzx3LZ0THxXocGf2q-yVs_QFGM2Cw7qE4nE}, err: <nil> [AfterEach] [sig-devex][Feature:Templates] templateservicebroker security test github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-templates-c2f4k" for this suite. [AfterEach] [sig-devex][Feature:Templates] templateservicebroker security test github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:78 skip [github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:50]: The template service broker is not installed: services "apiserver" not found
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:425]: Unexpected error: <errors.aggregate | len:1, cap:1>: [ { s: "promQL query returned unexpected results:\nALERTS{alertstate=~\"firing|pending\",alertname=\"AlertmanagerReceiversNotConfigured\"} == 1\n[]", }, ] promQL query returned unexpected results: ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1 [] occurred
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/prometheus/prometheus.go:250 [It] should have a AlertmanagerReceiversNotConfigured alert in firing state [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/prometheus/prometheus.go:414 Oct 13 10:18:47.799: INFO: Creating namespace "e2e-test-prometheus-rgcjs" Oct 13 10:18:48.082: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:18:48.208: INFO: Creating new exec pod STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1 Oct 13 10:19:38.429: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"' Oct 13 10:19:38.930: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n" Oct 13 10:19:38.930: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1 Oct 13 10:19:48.936: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"' Oct 13 10:19:49.360: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n" Oct 13 10:19:49.360: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1 Oct 13 10:19:59.364: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"' Oct 13 10:19:59.770: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n" Oct 13 10:19:59.770: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1 Oct 13 10:20:09.771: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"' Oct 13 10:20:10.208: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n" Oct 13 10:20:10.208: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1 Oct 13 10:20:20.209: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-rgcjs exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1"' Oct 13 10:20:20.571: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=ALERTS%7Balertstate%3D~%22firing%7Cpending%22%2Calertname%3D%22AlertmanagerReceiversNotConfigured%22%7D+%3D%3D+1'\n" Oct 13 10:20:20.571: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" [AfterEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:140 STEP: Collecting events from namespace "e2e-test-prometheus-rgcjs". STEP: Found 6 events. Oct 13 10:20:30.617: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-rgcjs/execpod to ostest-n5rnf-worker-0-j4pkp Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_execpod_e2e-test-prometheus-rgcjs_aa828229-4e80-481e-91d8-9da6b7d5b4b3_0(b1197de2b83f76ff87129fc7d36e6d651057920735675d6f4561d82aebc9aa8a): error adding pod e2e-test-prometheus-rgcjs_execpod to CNI network "multus-cni-network": [e2e-test-prometheus-rgcjs/execpod/aa828229-4e80-481e-91d8-9da6b7d5b4b3:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. 
Is kuryr-daemon running?; Post "http://localhost:5036/addNetwork": EOF Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:37 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.159.171/23] from kuryr Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:37 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:37 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container Oct 13 10:20:30.617: INFO: At 2022-10-13 10:19:37 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container Oct 13 10:20:30.623: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 10:20:30.623: INFO: execpod ostest-n5rnf-worker-0-j4pkp Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:48 +0000 UTC }] Oct 13 10:20:30.623: INFO: Oct 13 10:20:30.636: INFO: skipping dumping cluster info - cluster too large [AfterEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-prometheus-rgcjs" for this suite. fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:425]: Unexpected error: <errors.aggregate | len:1, cap:1>: [ { s: "promQL query returned unexpected results:\nALERTS{alertstate=~\"firing|pending\",alertname=\"AlertmanagerReceiversNotConfigured\"} == 1\n[]", }, ] promQL query returned unexpected results: ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1 [] occurred
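The polling loop can be replayed by hand with the same PromQL. A hedged sketch follows, assuming a recent oc that has `oc create token` (the test instead read a token from a mounted secret) and that the prometheus-k8s service account is permitted to query Thanos:

  # Assumes oc >= 4.11 for `oc create token`; on older clusters, extract a
  # service-account token secret instead.
  TOKEN="$(oc -n openshift-monitoring create token prometheus-k8s)"
  # Prometheus also accepts the query as a POST form field, which curl's
  # --data-urlencode produces.
  curl -sk -H "Authorization: Bearer ${TOKEN}" \
    --data-urlencode 'query=ALERTS{alertstate=~"firing|pending",alertname="AlertmanagerReceiversNotConfigured"} == 1' \
    https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query

Every iteration above returned {"status":"success","data":{"resultType":"vector","result":[]}}: the query itself succeeded but matched no series, i.e. the alert never entered pending or firing before the test gave up, so the failure is an empty vector, not a broken query.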
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-api-machinery][Feature:APIServer] github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-api-machinery][Feature:APIServer] github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:18:43.571: INFO: configPath is now "/tmp/configfile4267057803" Oct 13 10:18:43.571: INFO: The user is now "e2e-test-apiserver-sc9jw-user" Oct 13 10:18:43.571: INFO: Creating project "e2e-test-apiserver-sc9jw" Oct 13 10:18:43.789: INFO: Waiting on permissions in project "e2e-test-apiserver-sc9jw" ... Oct 13 10:18:43.801: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:18:43.917: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:18:44.032: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:18:44.143: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:18:44.152: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:18:44.159: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:18:44.685: INFO: Project "e2e-test-apiserver-sc9jw" has been fully provisioned. [It] TestTLSDefaults [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/apiserver/tls.go:17 [AfterEach] [sig-api-machinery][Feature:APIServer] github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:18:44.699: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-apiserver-sc9jw-user}, err: <nil> Oct 13 10:18:44.711: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-apiserver-sc9jw}, err: <nil> Oct 13 10:18:44.722: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~AQS98cMTx386yNULGmkAWvXfzREce70pIBdW9JuJkFA}, err: <nil> [AfterEach] [sig-api-machinery][Feature:APIServer] github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-apiserver-sc9jw" for this suite. skip [github.com/openshift/origin/test/extended/apiserver/tls.go:18]: skipping because it was broken in master
fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:571]: Unexpected error: <errors.aggregate | len:2, cap:2>: [ { s: "promQL query returned unexpected results:\ntemplate_router_reload_seconds_count{job=\"router-internal-default\"} >= 1\n[]", }, { s: "promQL query returned unexpected results:\nhaproxy_server_up{job=\"router-internal-default\"} >= 1\n[]", }, ] [promQL query returned unexpected results: template_router_reload_seconds_count{job="router-internal-default"} >= 1 [], promQL query returned unexpected results: haproxy_server_up{job="router-internal-default"} >= 1 []] occurred
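Before reading the full log for this case below, both asserted series can be spot-checked with the same pattern as the previous sketch (TOKEN is the hypothetical token minted there):

  # Reuses the hypothetical TOKEN from the earlier sketch.
  for q in \
    'template_router_reload_seconds_count{job="router-internal-default"} >= 1' \
    'haproxy_server_up{job="router-internal-default"} >= 1'; do
    curl -sk -H "Authorization: Bearer ${TOKEN}" \
      --data-urlencode "query=${q}" \
      https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query
  done

Empty vectors for both queries usually mean the router-internal-default job was never scraped at all, rather than that the router reloaded zero times.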
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/prometheus/prometheus.go:250 [It] should provide ingress metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/prometheus/prometheus.go:536 Oct 13 10:18:42.882: INFO: Creating namespace "e2e-test-prometheus-z4ls2" Oct 13 10:18:43.172: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:18:43.285: INFO: Creating new exec pod Oct 13 10:19:37.360: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/targets"' Oct 13 10:19:38.103: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/targets\n" Oct 13 10:19:38.120: INFO: stdout: 
"{\"status\":\"success\",\"data\":{\"activeTargets\":[{\"discoveredLabels\":{\"__address__\":\"10.128.97.62:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-apiserver-operator-546f548b78-l7cdh\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openshift-apiserver-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-apiserver-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.97.62\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:eb:3e:cf\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.97.62\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:eb:3e:cf\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-apiserver-operator-546f548b78\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.97.62\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"546f548b78\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-apiserver-operator-546f548b78-l7cdh\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7e82d129-a6cb-4990-a7d9-bc53374a0a30\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-apiserver-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_ope
nshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openshift-apiserver-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0\"},\"labels\":{\"container\":\"openshift-apiserver-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.97.62:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-apiserver-operator\",\"pod\":\"openshift-apiserver-operator-546f548b78-l7cdh\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0\",\"scrapeUrl\":\"https://10.128.97.62:8443/metrics\",\"globalUrl\":\"https://10.128.97.62:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:12.253178155Z\",\"lastScrapeDuration\":0.027244114,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent
_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"},\"labels\":{\"container\":\"openshift-apiserver-check-endpoints\",\"endpoint\":\"check-endpoints\",\"instance\":\"10.128.120.187:17698\",\"job\":\"check-endpoints\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-6sffs\",\"service\":\"check-endpoints\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\",\"scrapeUrl\":\"https://10.128.120.187:17698/metrics\",\"globalUrl\":\"https://10.128.120.187:17698/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.901340962Z\",\"lastScrapeDuration\":0.012753588,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent
_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"},\"labels\":{\"container\":\"openshift-apiserver-check-endpoints\",\"endpoint\":\"check-endpoints\",\"instance\":\"10.128.120.232:17698\",\"job\":\"check-endpoints\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-kctsl\",\"service\":\"check-endpoints\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\",\"scrapeUrl\":\"https://10.128.120.232:17698/metrics\",\"globalUrl\":\"https://10.128.120.232:17698/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.742824904Z\",\"lastScrapeDuration\":0.027122114,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_i
nclude_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"},\"labels\":{\"container\":\"openshift-apiserver-check-endpoints\",\"endpoint\":\"check-endpoints\",\"instance\":\"10.128.121.9:17698\",\"job\":\"check-endpoints\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-cwl5l\",\"service\":\"check-endpoints\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\",\"scrapeUrl\":\"https://10.128.121.9:17698/metrics\",\"globalUrl\":\"https://10.128.121.9:17698/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.161852166Z\",\"lastScrapeDuration\":0.01050134,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kub
ernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"},\"labels\":{\"apiserver\":\"openshift-apiserver\",\"container\":\"openshift-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.128.121.9:8443\",\"job\":\"api\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-cwl5l\",\"service\":\"api\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\",\"scrapeUrl\":\"https://10.128.121.9:8443/metrics\",\"globalUrl\":\"https://10.128.121.9:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:30.443263198Z\",\"lastScrapeDuration\":0.149381955,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_k
ubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"},\"labels\":{\"apiserver\":\"openshift-apiserver\",\"container\":\"openshift-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.128.120.187:8443\",\"job\":\"api\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-6sffs\",\"service\":\"api\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\",\"scrapeUrl\":\"https://10.128.120.187:8443/metrics\",\"globalUrl\":\"https://10.128.120.187:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.865818889Z\",\"lastScrapeDuration\":0.091865915,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_k
ubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"},\"labels\":{\"apiserver\":\"openshift-apiserver\",\"container\":\"openshift-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.128.120.232:8443\",\"job\":\"api\",\"namespace\":\"openshift-apiserver\",\"pod\":\"apiserver-bfb9686df-kctsl\",\"service\":\"api\"},\"scrapePool\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\",\"scrapeUrl\":\"https://10.128.120.232:8443/metrics\",\"globalUrl\":\"https://10.128.120.232:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:26.185870109Z\",\"lastScrapeDuration\":0.189400723,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.74.228:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"authentication-operator-788b66459f-ddzdg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"authentication-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-authentication-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.74.228\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5e:85:e3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.74.228\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5e:85:e3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"authentication-operator-788b66459f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.74.228\",\"__meta_kubernetes_pod_label_app\":\"authentication-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"788b66459f\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"authentication-operator-788b66459f-ddzdg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"56ea3d02-f1ac-40f9-bc17-195d5e8f43c5\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"authentication-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-authentication-operator/authentication-operator/0\"},\"labels\":{\"endpoint\":\"https\",\"instance\":\"10.128.74.228:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-authentication-operator\",\"pod\":\"authentication-operator-788b66459f-ddzdg\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-authentication-operator/authentication-operator/0\",\"scrapeUrl\":\"https://10.128.74.228:8443/metrics\",\"globalUrl\":\"https://10.128.74.228:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:22.790831556Z\",\"lastScrapeDuration\":0.059413039,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.116.141:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"oauth-openshift-7bc4d9f744-kvtwf\",\"__meta_kubernetes_endpoint_node_name\"
:\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"oauth-openshift\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"oauth-openshift\",\"__meta_kubernetes_namespace\":\"openshift-authentication\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.116.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:32:4d:81\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.116.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:32:4d:81\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_rvs_hash\":\"LhN4C_Fs9e4EBOG_HQKm0RnNParQYltKPI8fdru6ddi1ygGnkCHd59ZZVk38n0YN1dHxwUHSoERB6MLYDRL3xw\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_rvs_hash\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-openshift\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"oauth-openshift-7bc4d9f744\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.116.141\",\"__meta_kubernetes_pod_label_app\":\"oauth-openshift\",\"__meta_kubernetes_pod_label_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bc4d9f744\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"oauth-openshift-7bc4d9f744-kvtwf\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"334a4238-d82f-43ff-8ddb-57da32fac6cb\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"v4-0-config-system-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_opens
hift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"oauth-openshift\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"oauth-openshift\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\"},\"labels\":{\"container\":\"oauth-openshift\",\"endpoint\":\"https\",\"instance\":\"10.128.116.141:6443\",\"job\":\"oauth-openshift\",\"namespace\":\"openshift-authentication\",\"pod\":\"oauth-openshift-7bc4d9f744-kvtwf\",\"service\":\"oauth-openshift\"},\"scrapePool\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\",\"scrapeUrl\":\"https://10.128.116.141:6443/metrics\",\"globalUrl\":\"https://10.128.116.141:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.110052972Z\",\"lastScrapeDuration\":0.045585481,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.116.190:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"oauth-openshift-7bc4d9f744-rmcd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"oauth-openshift\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"oauth-openshift\",\"__meta_kubernetes_namespace\":\"openshift-authentication\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.116.190\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b2:42:17\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.116.190\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b2:42:17\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_rvs_hash\":\"LhN4C_Fs9e4EBOG_HQKm0RnNParQYltKPI8fdru6ddi1ygGnkCHd59ZZVk38n0YN1dHxwUHSoERB6MLYDRL3xw\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_rvs_hash\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-openshift\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"oauth-openshift-7bc4d9f744\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.116.190\",\"__meta_kubernetes_pod_label_app\":\"oauth-openshift\",\"__meta_kubernetes_pod_label_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bc4d9f744\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"oauth-openshift-7bc4d9f744-rmcd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fe4a836b-2edb-4051-a184-a493c373cdcf\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"v4-0-config-system-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"oauth-openshift\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"oauth-openshift\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\"},\"labels\":{\"container\":\"oauth-openshift\",\"endpoint\":\"https\",\"instance\":\"10.128.116.190:6443\",\"job\":\"oauth-openshift\",\"namespace\":\"openshift-authentication\",\"pod\":\"oauth-openshift-7bc4d9f744-rmcd6\",\"service\":\"oauth-openshift\"},\"scrapePool\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\",\"scrapeUrl\":\"https://10.128.116.190:6443/metrics\",\"globalUrl\":\"https://10.128.116.190:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.125961796Z\",\"lastScrapeDuration\":0.035315824,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.116.139:6443\",\"__meta_kubern
etes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"oauth-openshift-7bc4d9f744-nwqnk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"oauth-openshift\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"oauth-openshift\",\"__meta_kubernetes_namespace\":\"openshift-authentication\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.116.139\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:58:53:4d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.116.139\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:58:53:4d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_rvs_hash\":\"LhN4C_Fs9e4EBOG_HQKm0RnNParQYltKPI8fdru6ddi1ygGnkCHd59ZZVk38n0YN1dHxwUHSoERB6MLYDRL3xw\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_bootstrap_user_exists\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_rvs_hash\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-openshift\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"oauth-openshift-7bc4d9f744\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.116.139\",\"__meta_kubernetes_pod_label_app\":\"oauth-openshift\",\"__meta_kubernetes_pod_label_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bc4d9f744\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_openshift_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"oauth-openshift-7bc4d9f744-nwqnk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"319ceeed-1af7-4b29-bd77-7844ecad2b19\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"v4-0-config-system-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\"
,\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"oauth-openshift\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"oauth-openshift\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\"},\"labels\":{\"container\":\"oauth-openshift\",\"endpoint\":\"https\",\"instance\":\"10.128.116.139:6443\",\"job\":\"oauth-openshift\",\"namespace\":\"openshift-authentication\",\"pod\":\"oauth-openshift-7bc4d9f744-nwqnk\",\"service\":\"oauth-openshift\"},\"scrapePool\":\"serviceMonitor/openshift-authentication/oauth-openshift/0\",\"scrapeUrl\":\"https://10.128.116.139:6443/metrics\",\"globalUrl\":\"https://10.128.116.139:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:08.81740541Z\",\"lastScrapeDuration\":0.045039532,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.62.5:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cloud-credential-operator-5dc9b88859-x9ckp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cco-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cloud-credential-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.62.5\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:47:5f:af\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.62.5\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:47:5f:af\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cloud-credential-operator-5dc9b88859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.62.5\",\"__meta_kubernetes_pod_label_app\":\"cloud-credential-operator\",\"__meta_kubernetes_pod_label_control_plane\":\"controller-manager\",\"__meta_kubernetes_pod_label_controller_tools_k8s_io\":\"1.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5dc9b88859\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_control_plane\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_tools_k8s_io\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cloud-credential-operator-5dc9b88859-x9ckp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b04bf08f-5ee4-4230-a764-4b9450a669b0\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cloud-credential-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"cco-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.62.5:8443\",\"job\":\"cco-metrics\",\"namespace\":\"openshift-cloud-credential-operator\",\"pod\":\"cloud-credential-operator-5dc9b88859-x9ckp\",\"service\":\"cco-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0\",\"scrapeUrl\":\"https://10.128.62.5:8443/metrics\",\"globalUrl\":\"https://10.128.62.5:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.468746684Z\",\"lastScrapeDuration\":0.01799071,\"health\":\"u
p\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"},\"labels\":{\"container\":\"provisioner-kube-rbac-proxy\",\"endpoint\":\"provisioner-m\",\"instance\":\"10.196.0.105:9202\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":
\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\",\"scrapeUrl\":\"https://10.196.0.105:9202/metrics\",\"globalUrl\":\"https://10.196.0.105:9202/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:15.247525211Z\",\"lastScrapeDuration\":0.005503444,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__m
eta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"},\"labels\":{\"container\":\"provisioner-kube-rbac-proxy\",\"endpoint\":\"provisioner-m\",\"instance\":\"10.196.3.178:9202\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\",\"scrapeUrl\":\"https://10.196.3.178:9202/metrics\",\"globalUrl\":\"https://10.196.3.178:9202/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.426686928Z\",\"lastScrapeDuration\":0.007045632,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io
_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"},\"labels\":{\"container\":\"attacher-kube-rbac-proxy\",\"endpoint\":\"attacher-m\",\"instance\":\"10.196.0.105:9203\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\",\"scrapeUrl\":\"https://10.196.0.105:9203/metrics\",\"globalUrl\":\"https://10.196.0.105:9203/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.157015604Z\",\"lastScrapeDuration\":0.004119542,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\
":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"},\"labels\":{\"container\":\"attacher-kube-rbac-proxy\",\"endpoint\":\"attacher-m\",\"instance\":\"10.196.3.178:9203\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\",\"scrapeUrl\":\"https://10.196.3.178:9203/metrics\",\"globalUrl\":\"https://10.196.3.178:9203/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.689467811Z\",\"lastScrapeDuration\":0.007335703,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__met
a_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"},\"labels\":{\"container\":\"resizer-kube-rbac-proxy\",\"endpoint\":\"resizer-m\",\"instance\":\"10.196.0.105:9204\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\",\"scrapeUrl\":\"https://10.196.0.105:9204/metrics\",\"globalUrl\":\"https://10.196.0.105:9204/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.45930964Z\",\"lastScrapeDuration\":0.003662094,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092df
e49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"},\"labels\":{\"container\":\"resizer-kube-rbac-proxy\",\"endpoint\":\"resizer-m\",\"instance\":\"10.196.3.178:9204\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\",\"scrapeUrl\":\"https://10.196.3.178:9204/metrics\",\"globalUrl\":\"https://10.196.3.178:9204/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:08.527401721Z\",\"lastScrapeDuration\":0.003125686,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubern
etes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"},\"labels\":{\"container\":\"snapshotter-kube-rbac-proxy\",\"endpoint\":\"snapshotter-m\",\"instance\":\"10.196.0.105:9205\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"service\":\"openstack-cinder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\",\"scrapeUrl\":\"https://10.196.0.105:9205/metrics\",\"globalUrl\":\"https://10.196.0.105:9205/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.370489063Z\",\"lastScrapeDuration\":0.003117199,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\
"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"},\"labels\":{\"container\":\"snapshotter-kube-rbac-proxy\",\"endpoint\":\"snapshotter-m\",\"instance\":\"10.196.3.178:9205\",\"job\":\"openstack-cinder-csi-driver-controller-metrics\",\"namespace\":\"openshift-cluster-csi-drivers\",\"pod\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"service\":\"openstack-c
inder-csi-driver-controller-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\",\"scrapeUrl\":\"https://10.196.3.178:9205/metrics\",\"globalUrl\":\"https://10.196.3.178:9205/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.608231784Z\",\"lastScrapeDuration\":0.005526959,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-approver-d4748548d-wc7k6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"machine-approver\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-approver\",\"__meta_kubernetes_namespace\":\"openshift-cluster-machine-approver\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-approver-d4748548d\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"machine-approver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"d4748548d\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-approver-d4748548d-wc7k6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5f2960a2-9fac-4af6-a7a2-3acecdf0994c\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-approver-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"machine-approver\",\"__meta_kubernet
es_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-approver\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:9192\",\"job\":\"machine-approver\",\"namespace\":\"openshift-cluster-machine-approver\",\"pod\":\"machine-approver-d4748548d-wc7k6\",\"service\":\"machine-approver\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0\",\"scrapeUrl\":\"https://10.196.0.105:9192/metrics\",\"globalUrl\":\"https://10.196.0.105:9192/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.449590327Z\",\"lastScrapeDuration\":0.018503683,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.33.187:60000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-node-tuning-operator-6497f89df8-trnb7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"node-tuning-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-tuning-operator\",\"__meta_kubernetes_namespace\":\"openshift-cluster-node-tuning-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.33.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:05:4e:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.33.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:05:4e:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-node-tuning-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-node-tuning-operator-6497f89df8\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.33.187\",\"__meta_kubernetes_pod_label_name\":\"cluster-node-tuning-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6497f89df8\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-node-tuning-operator-6497f89df8-trnb7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"56602893-aede-4034-a781-8e61a61108ee\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-tuning-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"node-tuning-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"node-tuning-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0\"},\"labels\":{\"container\":\"cluster-node-tuning-operator\",\"endpoint\":\"60000\",\"instance\":\"10.128.33.187:60000\",\"job\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\",\"pod\":\"cluster-node-tuning-operator-6497f89df8-trnb7\",\"service\":\"node-tuning-operator\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0\",\"scrapeUrl\":\"https://10.128.33.187:60000/metrics\",\"globalUrl\":\"https://10.128.33.187:60000/metric
s\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:18:52.788010494Z\",\"lastScrapeDuration\":0.005562724,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.27.226:60000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-samples-operator-84c8d6b664-5s6ss\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"cluster-samples-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-samples-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.27.226\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e2:8a:b7\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.27.226\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e2:8a:b7\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-samples-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-samples-operator-84c8d6b664\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.27.226\",\"__meta_kubernetes_pod_label_name\":\"cluster-samples-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"84c8d6b664\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-samples-operator-84c8d6b664-5s6ss\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b74e39c3-0fad-4f9d-a03b-b5f51a1cf857\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"samples-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_man
aged\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"cluster-samples-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0\"},\"labels\":{\"container\":\"cluster-samples-operator\",\"endpoint\":\"60000\",\"instance\":\"10.128.27.226:60000\",\"job\":\"metrics\",\"namespace\":\"openshift-cluster-samples-operator\",\"pod\":\"cluster-samples-operator-84c8d6b664-5s6ss\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0\",\"scrapeUrl\":\"https://10.128.27.226:60000/metrics\",\"globalUrl\":\"https://10.128.27.226:60000/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:18:44.104524245Z\",\"lastScrapeDuration\":0.010710215,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.52.71:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-storage-operator-769c6b74d9-8rp8q\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-storage-operator-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-storage-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-storage-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.52.71\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fa:c9:ff\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.52.71\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fa:c9:ff\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-storage-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-storage-operator-769c6b74d9\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.52.71\",\"__meta_kubernetes_pod_label_name\":\"cluster-storage-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"769c6b74d9\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-storage-operator-769c6b74d9-8rp8q\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4281d8e7-f78d-47b3-bcc8-e4e74080e804\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-storage-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-storage-operator-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-storage-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\"},\"labels\":{\"container\":\"cluster-storage-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.52.71:8443\",\"job\":\"cluster-storage-operator-metrics\",\"namespace\":\"openshift-cluster-storage-operator\",\"pod\":\"cluster-storage-operator-769c6b74d9-8rp8q\",\"service\":\"cluster-storage-operator-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\",\"scrapeUrl\":\"https://10.128.52.71:8443/metrics\",\"globalUrl\":\"https:/
/10.128.52.71:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.581041121Z\",\"lastScrapeDuration\":0.023721771,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9099\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-version-operator-765fc9d8cb-86btb\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-version-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-version-operator\",\"__meta_kubernetes_namespace\":\"openshift-cluster-version\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-version-operator-765fc9d8cb\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-version-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"765fc9d8cb\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-version-operator-765fc9d8cb-86btb\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8657bbe8-1946-4615-a11a-753ad48ee115\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-version-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-version-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-version-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-version/cluster-version-operator/0\"},\"labels\":{\"endpoint\":\"metrics\",\"instance\":\"10.196.3.187:9099\",\"job\":\"cluster-version-operator\",\"namespace\":\"openshift-cluster-version\",\"pod\":\"cluster-version-operator-765fc9d8cb-86btb\",\"service\":\"cluster-version-operator\"},\"scrapePool\":\"serviceMonitor/openshift-cluster-version/cluster-version-operator/0\",\"scrapeUrl\":\"https://10.196.3.187:9099/metrics\",\"globalUrl\":
\"https://10.196.3.187:9099/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.601477518Z\",\"lastScrapeDuration\":0.017908332,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.73.213:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-config-operator-5654d7f9fc-dr2kj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openshift-config-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-config-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.73.213\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:3a:75:7b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.73.213\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:3a:75:7b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-config-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-config-operator-5654d7f9fc\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.73.213\",\"__meta_kubernetes_pod_label_app\":\"openshift-config-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5654d7f9fc\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-config-operator-5654d7f9fc-dr2kj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a4e299b6-fee2-4b7f-8411-b2b6980e2cbc\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"config-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed
\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openshift-config-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-config-operator/config-operator/0\"},\"labels\":{\"container\":\"openshift-config-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.73.213:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-config-operator\",\"pod\":\"openshift-config-operator-5654d7f9fc-dr2kj\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-config-operator/config-operator/0\",\"scrapeUrl\":\"https://10.128.73.213:8443/metrics\",\"globalUrl\":\"https://10.128.73.213:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.811022777Z\",\"lastScrapeDuration\":0.026739778,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.133.246:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"console-operator-7dbd68dd4b-44sxf\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"console-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-console-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.133.246\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:7b:40:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.133.246\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:7b:40:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"console-operator-7dbd68dd4b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.133.246\",\"__meta_kubernetes_pod_label_name\":\"console-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7dbd68dd4b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"console-operator-7dbd68dd4b-44sxf\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e9f337bf-a4d7-43c4-b3f1-154403484b7f\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"console-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-console-operator/console-operator/0\"},\"labels\":{\"endpoint\":\"https\",\"instance\":\"10.128.133.246:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-console-operator\",\"pod\":\"console-operator-7dbd68dd4b-44sxf\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-console-operator/console-operator/0\",\"scrapeUrl\":\"https://10.128.133.246:8443/metrics\",\"globalUrl\":\"https://10.128.133.246:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:11.256323704Z\",\"lastScrapeDuration\":0.039847012,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.48.110:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-controller-manager-operator-68c4bd4c8-tgrgc\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_ku
bernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.48.110\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:16:a6:05\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.48.110\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:16:a6:05\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-controller-manager-operator-68c4bd4c8\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.48.110\",\"__meta_kubernetes_pod_label_app\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"68c4bd4c8\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-controller-manager-operator-68c4bd4c8-tgrgc\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8df194bb-c941-4319-a18b-c2943ee1c557\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-controller-manager-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__m
eta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openshift-controller-manager-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0\"},\"labels\":{\"container\":\"openshift-controller-manager-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.48.110:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-controller-manager-operator\",\"pod\":\"openshift-controller-manager-operator-68c4bd4c8-tgrgc\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0\",\"scrapeUrl\":\"https://10.128.48.110:8443/metrics\",\"globalUrl\":\"https://10.128.48.110:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.44586628Z\",\"lastScrapeDuration\":0.024415192,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.110.148:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"controller-manager-p9snj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.110.148\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:db:0c:b5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.110.148\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:db:0c:b5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_force\":\"9c8024de-583f-4c3a-98c3-9520f9a74d10\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_force\":\"true\",\"__meta_kubernetes_pod_container_name\":\"controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"controller-manager\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.110.148\",\"__meta_kubernetes_pod_label_app\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_label_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7664fc7754\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"12\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"controller-manager-p9snj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"9576865f-eaac-48f5-9682-a7737ad33b3a\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\"},\"labels\":{\"container\":\"controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.128.110.148:8443\",\"job\":\"controller-manager\",\"namespace\":\"openshift-controller-manager\",\"pod\":\"controller-manager-p9snj\",\"service\":\"controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\",\"scrapeUrl\":\"https://10.128.110.148:8443/metrics\",\"globalUrl\":\"https://10.128.110.148:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.931325552Z\",\"lastScrapeDuration\":0.005279165,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"
10.128.110.159:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"controller-manager-fq5jx\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.110.159\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:de:ca:d0\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.110.159\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:de:ca:d0\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_force\":\"9c8024de-583f-4c3a-98c3-9520f9a74d10\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_force\":\"true\",\"__meta_kubernetes_pod_container_name\":\"controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"controller-manager\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.110.159\",\"__meta_kubernetes_pod_label_app\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_label_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7664fc7754\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"12\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"controller-manager-fq5jx\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57c93add-4cd2-4295-b3eb-51de98766ecf\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_
cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\"},\"labels\":{\"container\":\"controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.128.110.159:8443\",\"job\":\"controller-manager\",\"namespace\":\"openshift-controller-manager\",\"pod\":\"controller-manager-fq5jx\",\"service\":\"controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\",\"scrapeUrl\":\"https://10.128.110.159:8443/metrics\",\"globalUrl\":\"https://10.128.110.159:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.302061596Z\",\"lastScrapeDuration\":0.018836283,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.111.48:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"controller-manager-2zdvm\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.111.48\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:c9:36\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.111.48\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:c9:36\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_force\":\"9c8024de-583f-4c3a-98c3-9520f9a74d10\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_force\":\"true\",\"__meta_kubernetes_pod_container_name\":\"controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"controller-manager\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.111.48\",\"__meta_kubernetes_pod_label_app\":\"openshift-controller-manager\",\"__meta_kubernetes_pod_label_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7664fc7754\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"12\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"controller-manager-2zdvm\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"274636e4-c599-4b2c-8b13-1863af739102\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\"},\"labels\":{\"container\":\"controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.128.111.48:8443\",\"job\":\"controller-manager\",\"namespace\":\"openshift-controller-manager\",\"pod\":\"controller-manager-2zdvm\",\"service\":\"controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-controller-manager/openshift-controller-manager/0\",\"scrapeUrl\":\"https://10.128.111.48:8443/metrics\",\"globalUrl\":\"https://10.128.111.48:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.873227554Z\",\"lastScrapeDuration\":0.009550715,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.1
28.37.87:9393\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-operator-66f5f8df4f-7v8dq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"dns-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-dns-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.37.87\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a1:98:f6\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.37.87\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a1:98:f6\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9393\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-operator-66f5f8df4f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.37.87\",\"__meta_kubernetes_pod_label_name\":\"dns-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"66f5f8df4f\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-operator-66f5f8df4f-7v8dq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"dda1bc99-60c7-4ad3-a55a-8d1ef8728649\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret
_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"dns-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns-operator/dns-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.37.87:9393\",\"job\":\"metrics\",\"namespace\":\"openshift-dns-operator\",\"pod\":\"dns-operator-66f5f8df4f-7v8dq\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-dns-operator/dns-operator/0\",\"scrapeUrl\":\"https://10.128.37.87:9393/metrics\",\"globalUrl\":\"https://10.128.37.87:9393/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:26.292832122Z\",\"lastScrapeDuration\":0.018784759,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.127.52:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-hpsll\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.52\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.52\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.127.52\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-hpsll\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ae463ca1-be02-483f-9849-3e204beb4658\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.127.52:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-hpsll\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.127.52:9154/metrics\",\"globalUrl\":\"https://10.128.127.52:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.367484992Z\",\"lastScrapeDuration\":0.006761172,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.126.114:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": 
\\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.126.114\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"33957bcb-47be-49a6-83ad-300d0d7ffb69\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.126.114:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-wzmlj\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.126.114:9154/metrics\",\"globalUrl\":\"https://10.128.126.114:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:11.287688307Z\",\"lastScrapeDuration\":0.007791984,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.126.55:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.55\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n \\\"default\\\": true,\\n 
\\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.55\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.126.55\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f5ce003d-9392-40ac-a34e-8aa47c675f95\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.126.55:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-xb9vg\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.126.55:9154/metrics\",\"globalUrl\":\"https://10.128.126.55:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.961703077Z\",\"lastScrapeDuration\":0.012521505,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.126.73:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-n757c\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.73\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.73\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.126.73\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-n757c\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"22ea4790-c277-42c5-879d-f80c4aaa075d\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.126.73:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-n757c\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.126.73:9154/metrics\",\"globalUrl\":\"https://10.128.126.73:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.417278358Z\",\"lastScrapeDuration\":0.005076544,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.127.108:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-25bww\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubern
etes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.108\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.108\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.127.108\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-25bww\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"c0db5e71-94aa-4c0a-b650-7e5e3cb98e3e\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.127.108:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-25bww\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.127.108:9154/metrics\",\"globalUrl\":\"https://10.128.127.108:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.952339361Z\",\"lastScrapeDuration\":0.009716933,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.127.168:9154\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__me
ta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.168\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.168\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9154\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.127.168\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"31663356-b33c-43ae-a208-ed3064fcf0ee\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.127.168:9154\",\"job\":\"dns-default\",\"namespace\":\"openshift-dns\",\"pod\":\"dns-default-x6w5l\",\"service\":\"dns-default\"},\"scrapePool\":\"serviceMonitor/openshift-dns/dns-default/0\",\"scrapeUrl\":\"https://10.128.127.168:9154/metrics\",\"globalUrl\":\"https://10.128.127.168:9154/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.954931462Z\",\"lastScrapeDuration\":0.005886826,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.40.74:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-operator-764984fdd-cqns7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_po
rt_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"etcd-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-etcd-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.40.74\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:76:d6:be\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.40.74\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:76:d6:be\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"etcd-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"etcd-operator-764984fdd\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.40.74\",\"__meta_kubernetes_pod_label_app\":\"etcd-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"764984fdd\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-operator-764984fdd-cqns7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"24cb69e2-236c-45d1-ba9b-5951cbc0b6e8\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"etcd-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"etcd-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_pa
th__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-etcd-operator/etcd-operator/0\"},\"labels\":{\"container\":\"etcd-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.40.74:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-etcd-operator\",\"pod\":\"etcd-operator-764984fdd-cqns7\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-etcd-operator/etcd-operator/0\",\"scrapeUrl\":\"https://10.128.40.74:8443/metrics\",\"globalUrl\":\"https://10.128.40.74:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:14.651715269Z\",\"lastScrapeDuration\":0.103086036,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.83.151:60000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"image-registry-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"image-registry-operator\",\"__meta_kubernetes_namespace\":\"openshift-image-registry\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.83.151\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ca:de:36\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.83.151\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ca:de:36\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-image-registry-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-image-registry-operator-6cfc44cd58\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.83.151\",\"__meta_kubernetes_pod_label_name\":\"cluster-image-registry-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6cfc44cd58\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6f65971b-96c4-4cbd-9b8f-df3a6984fed3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"image-registry-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"image-registry-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"image-registry-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-image-registry/image-registry-operator/0\"},\"labels\":{\"container\":\"cluster-image-registry-operator\",\"endpoint\":\"60000\",\"instance\":\"10.128.83.151:60000\",\"job\":\"image-registry-operator\",\"namespace\":\"openshift-image-registry\",\"pod\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"service\":\"image-registry-operator\"},\"scrapePool\":\"serviceMonitor/openshift-image-registry/image-registry-operator/0\",\"scrapeUrl\":\"https://10.128.83.151:60000/metrics\",\"globalUrl\":\"https://10.128.83.151:60000/met
rics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.966396173Z\",\"lastScrapeDuration\":0.003201038,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.83.90:5000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"image-registry-5dcfbfdb49-m9mjk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"5000-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_docker_registry\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_endpoints_name\":\"image-registry\",\"__meta_kubernetes_namespace\":\"openshift-image-registry\",\"__meta_kubernetes_pod_annotation_imageregistry_operator_openshift_io_dependencies_checksum\":\"sha256:c2e4379a3614d3c6245d6a72b78f2bc288bf39df517d68b7c6dd5439a409036c\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.83.90\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1e:6d:d3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.83.90\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1e:6d:d3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_imageregistry_operator_openshift_io_dependencies_checksum\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry\",\"__meta_kubernetes_pod_container_port_number\":\"5000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"image-registry-5dcfbfdb49\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.83.90\",\"__meta_kubernetes_pod_label_docker_registry\":\"default\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5dcfbfdb49\",\"__meta_kubernetes_pod_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"image-registry-5dcfbfdb49-m9mjk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b6cdb3a-3f4f-4e5e-8e6c-5dda0d62ec22\",\"__meta_kubernetes_service_annotation_imageregistry_operator_openshift_io_checksum\":\"sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"image-registry-tls\",\"__meta_kubernetes_service_annotationpresent_imageregistry_operator_openshift_io_checksum\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_docker_registry\":\"default\",\"__meta_kubernetes_service_labelpresent_docker_registry\":\"true\",\"__me
ta_kubernetes_service_name\":\"image-registry\",\"__metrics_path__\":\"/extensions/v2/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-image-registry/image-registry/0\"},\"labels\":{\"container\":\"registry\",\"endpoint\":\"5000-tcp\",\"instance\":\"10.128.83.90:5000\",\"job\":\"image-registry\",\"namespace\":\"openshift-image-registry\",\"pod\":\"image-registry-5dcfbfdb49-m9mjk\",\"service\":\"image-registry\"},\"scrapePool\":\"serviceMonitor/openshift-image-registry/image-registry/0\",\"scrapeUrl\":\"https://10.128.83.90:5000/extensions/v2/metrics\",\"globalUrl\":\"https://10.128.83.90:5000/extensions/v2/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.514785918Z\",\"lastScrapeDuration\":0.039108771,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.59.173:9393\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"ingress-operator-854bc688f9-lg2hg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"ingress-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-ingress-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.59.173\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d0:47:b1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.59.173\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d0:47:b1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9393\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"ingress-operator-854bc688f9\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.59.173\",\"__meta_kubernetes_pod_label_name\":\"ingress-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"854bc688f9\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"ingress-operator-854bc688f9-lg2hg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"433cff22-73d0-4f33-bd96-649e821932f7\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"ingress-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress-operator/ingress-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.59.173:9393\",\"job\":\"metrics\",\"namespace\":\"openshift-ingress-operator\",\"pod\":\"ingress-operator-854bc688f9-lg2hg\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-ingress-operator/ingress-operator/0\",\"scrapeUrl\":\"https://10.128.59.173:9393/metrics\",\"globalUrl\":\"https://10.128.59.173:9393/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.319097889Z\",\"lastScrapeDuration\":0.02144239,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.19
6.0.199:1936\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"1936\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7a994a2f-c4ec-4a4c-b4ae-b9ef7f93bb00\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"},\
"labels\":{\"container\":\"router\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.199:1936\",\"job\":\"router-internal-default\",\"namespace\":\"openshift-ingress\",\"pod\":\"router-default-697ff75b79-qcfbg\",\"service\":\"router-internal-default\"},\"scrapePool\":\"serviceMonitor/openshift-ingress/router-default/0\",\"scrapeUrl\":\"https://10.196.0.199:1936/metrics\",\"globalUrl\":\"https://10.196.0.199:1936/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:08.778124155Z\",\"lastScrapeDuration\":0.035368818,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:1936\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"1936\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"74040c8a-de64-4dff-943f-8e9a926a790e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annota
tionpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"},\"labels\":{\"container\":\"router\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.169:1936\",\"job\":\"router-internal-default\",\"namespace\":\"openshift-ingress\",\"pod\":\"router-default-697ff75b79-t6b78\",\"service\":\"router-internal-default\"},\"scrapePool\":\"serviceMonitor/openshift-ingress/router-default/0\",\"scrapeUrl\":\"https://10.196.2.169:1936/metrics\",\"globalUrl\":\"https://10.196.2.169:1936/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.944375108Z\",\"lastScrapeDuration\":0.019583431,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.29.145:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"insights-operator-54767897df-vbchm\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"insights-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-insights\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.29.145\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:30:86:ed\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.29.145\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:30:86:ed\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"insights-operator\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"insights-operator-54767897df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.29.145\",\"__meta_kubernetes_pod_label_app\":\"insights-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"54767897df\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"insights-operator-54767897df-vbchm\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"c1cc781b-ec36-43c0-be31-e31e50df6f49\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-insights-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"insights-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-insights/insights-operator/0\"},\"labels\":{\"container\":\"insights-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.29.145:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-insights\",\"pod\":\"insights-operator-54767897df-vbchm\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-insights/insights-operator/0\",\"scrapeUrl\":\"https://10.128.29.145:8443/metrics\",\"globalUrl\":\"https://10.128.29.145:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:26.18982589Z\",\"lastScrapeDuration\":0.031010874,\"health\":\"up\"},{\"discoveredLabels\":{\"__ad
dress__\":\"10.128.87.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-apiserver-operator-7f59b6f8c4-jthtm\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kube-apiserver-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-kube-apiserver-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.87.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:9a:30:26\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.87.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:9a:30:26\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-apiserver-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-apiserver-operator-7f59b6f8c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.87.239\",\"__meta_kubernetes_pod_label_app\":\"kube-apiserver-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7f59b6f8c4\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-apiserver-operator-7f59b6f8c4-jthtm\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f97319cf-40ed-4a80-837a-cb028bc49508\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"kube-apiserver-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshif
t_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"kube-apiserver-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0\"},\"labels\":{\"container\":\"kube-apiserver-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.87.239:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-kube-apiserver-operator\",\"pod\":\"kube-apiserver-operator-7f59b6f8c4-jthtm\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0\",\"scrapeUrl\":\"https://10.128.87.239:8443/metrics\",\"globalUrl\":\"https://10.128.87.239:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.65832312Z\",\"lastScrapeDuration\":0.0214069,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:6443\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubernetes\",\"__meta_kubernetes_namespace\":\"default\",\"__meta_kubernetes_service_label_component\":\"apiserver\",\"__meta_kubernetes_service_label_provider\":\"kubernetes\",\"__meta_kubernetes_service_labelpresent_component\":\"true\",\"__meta_kubernetes_service_labelpresent_provider\":\"true\",\"__meta_kubernetes_service_name\":\"kubernetes\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\"},\"labels\":{\"apiserver\":\"kube-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:6443\",\"job\":\"apiserver\",\"namespace\":\"default\",\"service\":\"kubernetes\"},\"scrapePool\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\",\"scrapeUrl\":\"https://10.196.0.105:6443/metrics\",\"globalUrl\":\"https://10.196.0.105:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:27.79305226Z\",\"lastScrapeDuration\":0.244647646,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:6443\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubernetes\",\"__meta_kubernetes_namespace\":\"default\",\"__meta_kubernetes_service_label_component\":\"apiserver\",\"__meta_kubernetes_service_label_provider\":\"kubernetes\",\"__meta_kubernetes_service_labelpresent_component\":\"true\",\"__meta_kubernetes_service_labelpresent_provider\":\"true\",\"__meta_kubernetes_service_name\":\"kubernetes\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\"},\"labels\":{\"apiserver\":\"kube-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.196.3.178:644
3\",\"job\":\"apiserver\",\"namespace\":\"default\",\"service\":\"kubernetes\"},\"scrapePool\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\",\"scrapeUrl\":\"https://10.196.3.178:6443/metrics\",\"globalUrl\":\"https://10.196.3.178:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.452195688Z\",\"lastScrapeDuration\":0.401500015,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:6443\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_endpointslice_kubernetes_io_skip_mirror\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubernetes\",\"__meta_kubernetes_namespace\":\"default\",\"__meta_kubernetes_service_label_component\":\"apiserver\",\"__meta_kubernetes_service_label_provider\":\"kubernetes\",\"__meta_kubernetes_service_labelpresent_component\":\"true\",\"__meta_kubernetes_service_labelpresent_provider\":\"true\",\"__meta_kubernetes_service_name\":\"kubernetes\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\"},\"labels\":{\"apiserver\":\"kube-apiserver\",\"endpoint\":\"https\",\"instance\":\"10.196.3.187:6443\",\"job\":\"apiserver\",\"namespace\":\"default\",\"service\":\"kubernetes\"},\"scrapePool\":\"serviceMonitor/openshift-kube-apiserver/kube-apiserver/0\",\"scrapeUrl\":\"https://10.196.3.187:6443/metrics\",\"globalUrl\":\"https://10.196.3.187:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:30.687675426Z\",\"lastScrapeDuration\":0.436359264,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.25.14:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-controller-manager-operator-7b9f4f4cdf-4n52n\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kube-controller-manager-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.25.14\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e2:50:e1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.25.14\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e2:50:e1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-controller-manager-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-controller-manager-operator-7b9f4f4cdf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.25.14\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7b9f4f4cdf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-operator-7b9f4f4cdf-4n52n\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f093eaa7-c949-484c-830a-8e29e64deb7b\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-controller-manager-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"kube-controller-manager-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0\"},\"labels\":{\"container\":\"kube-controller-manager-operator\",\"endpoint\":\"https\",\"instance\":\"10.128.25.14:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-kube-controller-manager-operator\",\"pod\":\"kube-controller-manager-operator-7b9f4f4cdf-4n52n\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0\",\"scrapeUrl\":\"https://10.128.25.14:8443/metrics\",\"globalUrl\":\"https://10.128.25.14:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.114642782Z\",\"lastScrapeDuration
\":0.022422876,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10257\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-controller-manager-ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"30cc4fad-2707-49ca-8af4-654dfe7049f2\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"30cc4fad-2707-49ca-8af4-654dfe7049f2\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:01.957733716Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"10257\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"9fe004e7-c0d0-4b1a-bc98-e115973fe308\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshif
t_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"},\"labels\":{\"container\":\"kube-controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:10257\",\"job\":\"kube-controller-manager\",\"namespace\":\"openshift-kube-controller-manager\",\"pod\":\"kube-controller-manager-ostest-n5rnf-master-0\",\"service\":\"kube-controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\",\"scrapeUrl\":\"https://10.196.0.105:10257/metrics\",\"globalUrl\":\"https://10.196.0.105:10257/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.528167286Z\",\"lastScrapeDuration\":0.519380403,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10257\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-controller-manager-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"4d079c6f-40c7-4c4b-9915-95bfdc4d90bf\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"4d079c6f-40c7-4c4b-9915-95bfdc4d90bf\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:50.144170849Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"10257\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"dafaafdf-d6ab-43af-a3b8-182083a9c825\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"},\"labels\":{\"container\":\"kube-controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.196.3.178:10257\",\"job\":\"kube-controller-manager\",\"namespace\":\"openshift-kube-controller-manager\",\"pod\":\"kube-controller-manager-ostest-n5rnf-master-1\",\"service\":\"kube-controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\",\"scrapeUrl\":\"https://10.196.3.178:10257/metrics\",\"globalUrl\":\"https://10.196.3.178:10257/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.775634904Z\",\"lastScrapeDuration\":0.038075955,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10257\",\"__meta_kubernetes_endpoint_address_target_kind
\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-controller-manager-ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"8673eaec-7022-428b-9556-52d3f1ba194f\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"8673eaec-7022-428b-9556-52d3f1ba194f\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:15.460702568Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-controller-manager\",\"__meta_kubernetes_pod_container_port_number\":\"10257\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e6e98f52-d119-440e-88f0-02ce9237fa4d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true
\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"},\"labels\":{\"container\":\"kube-controller-manager\",\"endpoint\":\"https\",\"instance\":\"10.196.3.187:10257\",\"job\":\"kube-controller-manager\",\"namespace\":\"openshift-kube-controller-manager\",\"pod\":\"kube-controller-manager-ostest-n5rnf-master-2\",\"service\":\"kube-controller-manager\"},\"scrapePool\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\",\"scrapeUrl\":\"https://10.196.3.187:10257/metrics\",\"globalUrl\":\"https://10.196.3.187:10257/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.007468609Z\",\"lastScrapeDuration\":0.013537433,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.12.37:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-operator-66c644698-767c2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openshift-kube-scheduler-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.12.37\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d0:96:94\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.12.37\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d0:96:94\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-kube-scheduler-operator-66c644698\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.12.37\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"66c644698\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-operator-66c644698-767c2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ffbb21d7-6360-4aa3-9f64-a6c9c169318d\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"kube-scheduler-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openshift-kube-scheduler-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0\"},\"labels\":{\"endpoint\":\"https\",\"instance\":\"10.128.12.37:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-kube-scheduler-operator\",\"pod\":\"openshift-kube-scheduler-operator-66c644698-767c2\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0\",\"scrapeUrl\":\"https://10.128.12.37:8443/metrics\",\"globalUrl\":\"https://10.128.12.37:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:37.604162472Z\",\"lastScrapeDuration\":0.016354755,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kube
rnetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"1867b8bd-c706-476a-9511-936fdd6139d6\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"1867b8bd-c706-476a-9511-936fdd6139d6\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:16.955822852Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1bcaee97-1a38-4283-9a2d-41e514e74562\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-
scheduler/kube-scheduler/0\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\",\"scrapeUrl\":\"https://10.196.0.105:10259/metrics\",\"globalUrl\":\"https://10.196.0.105:10259/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.515431376Z\",\"lastScrapeDuration\":0.03260175,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"83bc82d7-6403-4f3e-aa8f-8e945f447d1e\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"83bc82d7-6403-4f3e-aa8f-8e945f447d1e\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:16.042484581Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1139f840-9de9-4ce6-a949-4acc83331b22\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_ku
bernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.3.178:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\",\"scrapeUrl\":\"https://10.196.3.178:10259/metrics\",\"globalUrl\":\"https://10.196.3.178:10259/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:16.769490584Z\",\"lastScrapeDuration\":0.033924075,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"9d7a833b-10ce-49d4-9b73-999cbb8f381c\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"9d7a833b-10ce-49d4-9b73-999cbb8f381c\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:04.640756071Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"356d4529-8a6a-4a65-a827-a2e6bdcefa33\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.3.187:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/0\",\"scrapeUrl\":\"https://10.196.3.187:10259/metrics\",\"globalUrl\":\"https://10.196.3.187:10259/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.632645378Z\",\"lastScrapeDuration\":0.024249241,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"__meta_kubern
etes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"1867b8bd-c706-476a-9511-936fdd6139d6\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"1867b8bd-c706-476a-9511-936fdd6139d6\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:16.955822852Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1bcaee97-1a38-4283-9a2d-41e514e74562\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics/re
sources\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.0.105:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-0\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\",\"scrapeUrl\":\"https://10.196.0.105:10259/metrics/resources\",\"globalUrl\":\"https://10.196.0.105:10259/metrics/resources\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.46566805Z\",\"lastScrapeDuration\":0.029085268,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"83bc82d7-6403-4f3e-aa8f-8e945f447d1e\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"83bc82d7-6403-4f3e-aa8f-8e945f447d1e\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:16.042484581Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1139f840-9de9-4ce6-a949-4acc83331b22\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__metrics_path__\":\"/metrics/resources\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.3.178:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-1\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\",\"scrapeUrl\":\"https://10.196.3.178:10259/metrics/resources\",\"globalUrl\":\"https://10.196.3.178:10259/metrics/resources\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:27.419146966Z\",\"lastScrapeDuration\":0.020658887,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10259\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-kube-scheduler-ostest-n5
rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"scheduler\",\"__meta_kubernetes_namespace\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-scheduler\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"9d7a833b-10ce-49d4-9b73-999cbb8f381c\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"9d7a833b-10ce-49d4-9b73-999cbb8f381c\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:04.640756071Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-scheduler\",\"__meta_kubernetes_pod_container_port_number\":\"10259\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"openshift-kube-scheduler\",\"__meta_kubernetes_pod_label_revision\":\"12\",\"__meta_kubernetes_pod_label_scheduler\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_labelpresent_scheduler\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"356d4529-8a6a-4a65-a827-a2e6bdcefa33\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"scheduler\",\"__
metrics_path__\":\"/metrics/resources\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\"},\"labels\":{\"container\":\"kube-scheduler\",\"endpoint\":\"https\",\"instance\":\"10.196.3.187:10259\",\"job\":\"scheduler\",\"namespace\":\"openshift-kube-scheduler\",\"pod\":\"openshift-kube-scheduler-ostest-n5rnf-master-2\",\"service\":\"scheduler\"},\"scrapePool\":\"serviceMonitor/openshift-kube-scheduler/kube-scheduler/1\",\"scrapeUrl\":\"https://10.196.3.187:10259/metrics/resources\",\"globalUrl\":\"https://10.196.3.187:10259/metrics/resources\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:20.196234933Z\",\"lastScrapeDuration\":0.006441924,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-cjcgk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-cjcgk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"bbdf1c26-e361-4015-9404-a307c40d0734\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.105:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-cjcgk\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshi
ft-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.0.105:9655/metrics\",\"globalUrl\":\"http://10.196.0.105:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:14.807987556Z\",\"lastScrapeDuration\":0.015268035,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-xzbzv\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-xzbzv\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"false\",\"__meta_kubernetes_pod_uid\":\"9a46eb61-8782-4c26-9e89-8fef6e4a33e9\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.199:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-xzbzv\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.0.199:9655/metrics\",\"globalUrl\":\"http://10.196.0.199:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.235107026Z\",\"lastScrapeDuration\":0.004634639,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-crfvc\"
,\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-crfvc\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"false\",\"__meta_kubernetes_pod_uid\":\"de39c947-6203-413a-aa51-b069776af721\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.169:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-crfvc\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.2.169:9655/metrics\",\"globalUrl\":\"http://10.196.2.169:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.747325582Z\",\"lastScrapeDuration\":0.005496352,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-2rrvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_en
dpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-2rrvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e6e1bace-f2ff-419b-9206-323d49ce67ec\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.72:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-2rrvs\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.2.72:9655/metrics\",\"globalUrl\":\"http://10.196.2.72:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:23.686478061Z\",\"lastScrapeDuration\":0.004933796,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-ndzt5\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\
":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-ndzt5\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5497497a-dd9f-464c-a031-1af7c8a3123c\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.178:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-ndzt5\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.3.178:9655/metrics\",\"globalUrl\":\"http://10.196.3.178:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.377750451Z\",\"lastScrapeDuration\":0.016292805,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-t448w\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\
"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-t448w\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"568d2b5d-b1f3-4810-8ef5-058a27e6266a\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"},\"labels\":{\"container\":\"kuryr-cni\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.187:9655\",\"job\":\"kuryr-cni\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-cni-t448w\",\"service\":\"kuryr-cni\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\",\"scrapeUrl\":\"http://10.196.3.187:9655/metrics\",\"globalUrl\":\"http://10.196.3.187:9655/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.317261456Z\",\"lastScrapeDuration\":0.055843776,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9654\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-controller-7654df4d98-f2qvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-controller\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9654\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-controller-7654df4d98\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kuryr-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7654df4d98\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-controller-7654df
4d98-f2qvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2543a36c-08af-4a31-9ae6-f0cb7c99a745\",\"__meta_kubernetes_service_label_app\":\"kuryr-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"},\"labels\":{\"container\":\"controller\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.178:9654\",\"job\":\"kuryr-controller\",\"namespace\":\"openshift-kuryr\",\"pod\":\"kuryr-controller-7654df4d98-f2qvz\",\"service\":\"kuryr-controller\"},\"scrapePool\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\",\"scrapeUrl\":\"http://10.196.3.178:9654/metrics\",\"globalUrl\":\"http://10.196.3.178:9654/metrics\",\"lastError\":\"Get \\\"http://10.196.3.178:9654/metrics\\\": context deadline exceeded\",\"lastScrape\":\"2022-10-13T10:18:24.918549909Z\",\"lastScrapeDuration\":30.00047891,\"health\":\"down\"},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.45.39:9192\",\"job\":\"cluster-autoscaler-operator\",\"namespace\":\"openshift-machine-api\",\"pod\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"service\":\"cluster-autoscaler-operator\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\",\"scrapeUrl\":\"https://10.128.45.39:9192/metrics\",\"globalUrl\":\"https://10.128.45.39:9192/metrics\",\
"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.891552226Z\",\"lastScrapeDuration\":0.025146096,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation
_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"},\"labels\":{\"container\":\"kube-rbac-proxy-machine-mtrc\",\"endpoint\":\"machine-mtrc\",\"instance\":\"10.128.44.154:8441\",\"job\":\"machine-api-controllers\",\"namespace\":\"openshift-machine-api\",\"pod\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"service\":\"machine-api-controllers\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\",\"scrapeUrl\":\"https://10.128.44.154:8441/metrics\",\"globalUrl\":\"https://10.128.44.154:8441/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.194383994Z\",\"lastScrapeDuration\":0.020119044,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"},\"labels\":{\"container\":\"kube-rbac-proxy-machineset-mtrc\",\"endpoint\":\"machineset-mtrc\",\"instance\":\"10.128.44.154:8442\",\"job\":\"machine-api-co
ntrollers\",\"namespace\":\"openshift-machine-api\",\"pod\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"service\":\"machine-api-controllers\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\",\"scrapeUrl\":\"https://10.128.44.154:8442/metrics\",\"globalUrl\":\"https://10.128.44.154:8442/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.904432709Z\",\"lastScrapeDuration\":0.023808989,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_rele
ase_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"},\"labels\":{\"container\":\"kube-rbac-proxy-mhc-mtrc\",\"endpoint\":\"mhc-mtrc\",\"instance\":\"10.128.44.154:8444\",\"job\":\"machine-api-controllers\",\"namespace\":\"openshift-machine-api\",\"pod\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"service\":\"machine-api-controllers\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\",\"scrapeUrl\":\"https://10.128.44.154:8444/metrics\",\"globalUrl\":\"https://10.128.44.154:8444/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:30.228431216Z\",\"lastScrapeDuration\":0.015670893,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.42\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.42\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.44.42:8443\",\"job\":\"machine-api-operator\",\"namespace\":\"openshift-machine-api\",\"pod\":\"machine-api-operator-74b9f87587-s6jf2\",\"service\":\"machine-api-operator\"},\"scrapePool\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\",\"scrapeUrl\":\"https://10.128.44.42:8443/metrics\",\"globalUrl\":\"https://10.128.44.42:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:08.552830794Z\",\"lastScrape
Duration\":0.019565875,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-7nbkb\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-7nbkb\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"23bebf09-fce1-46a3-ab7d-9f2c6be459cf\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"
container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.187:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-master-2\",\"pod\":\"machine-config-daemon-7nbkb\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.3.187:9001/metrics\",\"globalUrl\":\"https://10.196.3.187:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.627115784Z\",\"lastScrapeDuration\":0.012285439,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-s42r2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-s42r2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ca09e5cb-456f-4900-a4a4-da8699d8ea6d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\"
,\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.105:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-master-0\",\"pod\":\"machine-config-daemon-s42r2\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.0.105:9001/metrics\",\"globalUrl\":\"https://10.196.0.105:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:14.269681827Z\",\"lastScrapeDuration\":0.013502674,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-twth5\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-twth5\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d445393-db4d-4b75-b45d-05c4248a66e7\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tl
s\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.0.199:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"pod\":\"machine-config-daemon-twth5\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.0.199:9001/metrics\",\"globalUrl\":\"https://10.196.0.199:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.18878583Z\",\"lastScrapeDuration\":0.004698552,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-hmq85\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-hmq85\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"11ee5a
22-7c69-4d1f-a773-71b0d48e28f1\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.169:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"pod\":\"machine-config-daemon-hmq85\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.2.169:9001/metrics\",\"globalUrl\":\"https://10.196.2.169:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.55262142Z\",\"lastScrapeDuration\":0.004336073,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-rrg8p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_namespace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__
meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-rrg8p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b094f84e-7c68-4df8-ab47-e0e40d515b76\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.2.72:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"pod\":\"machine-config-daemon-rrg8p\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.2.72:9001/metrics\",\"globalUrl\":\"https://10.196.2.72:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:12.694720207Z\",\"lastScrapeDuration\":0.016292188,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9001\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-config-daemon-kc9g6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-config-daemon\",\"__meta_kubernetes_nam
espace\":\"openshift-machine-config-operator\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-config-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"5bb8b444bb\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-config-daemon-kc9g6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"655a3677-59eb-4cd3-811e-ecad4da2edc1\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"proxy-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-config-daemon\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-config-daemon\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.196.3.178:9001\",\"job\":\"machine-config-daemon\",\"namespace\":\"openshift-machine-config-operator\",\"node\":\"ostest-n5rnf-master-1\",\"pod\":\"machine-config-daemon-kc9g6\",\"service\":\"machine-config-daemon\"},\"scrapePool\":\"serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0\",\"scrapeUrl\":\"https://10.196.3.178:9001/metrics\",\"globalUrl\":\"https://10.196.3.178:9001/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.84057596Z\",\"lastScrapeDuration\":0.010546749,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.79.141:8081\",\"__meta_kuberne
tes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"marketplace-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"marketplace-operator-79fb778f6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.79.141\",\"__meta_kubernetes_pod_label_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79fb778f6b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b3bba0b4-92e7-461f-abff-61fc1b5cd349\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_se
rvice_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"marketplace-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"marketplace-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.128.79.141:8081\",\"job\":\"marketplace-operator-metrics\",\"namespace\":\"openshift-marketplace\",\"pod\":\"marketplace-operator-79fb778f6b-qc8zr\",\"service\":\"marketplace-operator-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\",\"scrapeUrl\":\"https://10.128.79.141:8081/metrics\",\"globalUrl\":\"https://10.128.79.141:8081/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.446273311Z\",\"lastScrapeDuration\":0.006143945,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"},\"labels\":{\"container\":\"alertmanager-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.22.112:9095\",\"job\":\"alertmanager-main\",\"namespace\":\"openshift-monitoring\",\"pod\":\"alertmanager-main-1\",\"service\":\"alertmanager-main\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/alertmanager/0\",\"scrapeUrl\":\"https://10.128.22.112:9095/metrics\",\"globalUrl\":\"https://10.128.22.112:9095/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.422413698Z\",\"lastScrapeDuration\":0.012617696,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"},\"labels\":{\"container\":\"alertmanager-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.138:9095\",\"job\":\"alertmanager-main\",\"namespace\":\"openshift-monitoring\",\"pod\":\"alertmanager-main-2\",\"service\":\"alertmanager-main\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/alertmanager/0\",\"scrapeUrl\":\"https://10.128.23.138:9095/metrics\",\"globalUrl\":\"https://10.128.23.138:9095/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:10.949991298Z\",\"lastScrapeDuration\":0.024768801,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"},\"labels\":{\"container\":\"alertmanager-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.161:9095\",\"job\":\"alertmanager-main\",\"namespace\":\"openshift-monitoring\",\"pod\":\"alertmanager-main-0\",\"service\":\"alertmanager-main\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/alertmanager/0\",\"scrapeUrl\":\"https://10.128.23.161:9095/metrics\",\"globalUrl\":\"https://10.128.23.161:9095/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.106133479Z\",\"lastScrapeDuration\":0.017946279,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.23.49:8443\",\"job\":\"cluster-monitoring-operator\",\"namespace\":\"openshift-monitoring\",\"pod\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"service\":\"cluster-monitoring-operator\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\",\"scrapeUrl\":\"https://10.128.23.49:8443/metrics\",\"globalUrl\":\"https://10.128.23.49:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:21.672271645Z\",\"lastScrapeDuration\":0.009641354,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9979\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"etcd-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kub
ernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:28:22.756939605Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"742f6dc2-47a0-41cc-b0a9-13e66d83f057\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"},\"labels\":{\"endpoint\":\"etcd-metrics\",\"instance\":\"10.196.0.105:9979\",\"job\":\"etcd\",\"namespace\":\"openshift-etcd\",\"pod\":\"etcd-ostest-n5rnf-master-0\",\"service\":\"etcd\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/etcd/0\",\"scrapeUrl\":\"https://10.196.0.105:9979/metrics\",\"globalUrl\":\"https://10.196.0.105:9979/metrics\",\"lastE
rror\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.905482827Z\",\"lastScrapeDuration\":0.053485266,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9979\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"etcd-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:56.640481859Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6891d70c-a3ec-4d90-b283-d4abf49382d3\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"},\"labels\":{\"endpoint\":\"etcd-metrics\",\"instance\":\"10.196.3.178:9979\",\"job\":\"etcd\",\"namespace\":\"openshift-etcd\",\"pod\":\"etcd-ostest-n5rnf-master-1\",\"service\":\"etcd\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/etcd/0\",\"scrapeUrl\":\"https://10.196.3.178:9979/metrics\",\"globalUrl\":\"https://10.196.3.178:9979/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.53801222Z\",\"lastScrapeDuration\":0.046140542,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9979\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"etcd-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:36.245067150Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": 
\\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"49572518-4248-4dc2-8392-e8298ad9706c\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"},\"labels\":{\"endpoint\":\"etcd-metrics\",\"instance\":\"10.196.3.187:9979\",\"job\":\"etcd\",\"namespace\":\"openshift-etcd\",\"pod\":\"etcd-ostest-n5rnf-master-2\",\"service\":\"etcd\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/etcd/0\",\"scrapeUrl\":\"https://10.196.3.187:9979/metrics\",\"globalUrl\":\"https://10.196.3.187:9979/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.571197242Z\",\"lastScrapeDuration\":0.073158888,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_end
points_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_
annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"},\"labels\":{\"container\":\"grafana-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.22.230:3000\",\"job\":\"grafana\",\"namespace\":\"openshift-monitoring\",\"pod\":\"grafana-7c5c5fb5b6-cht4p\",\"service\":\"grafana\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/grafana/0\",\"scrapeUrl\":\"https://10.128.22.230:3000/metrics\",\"globalUrl\":\"https://10.128.22.230:3000/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.211282031Z\",\"lastScrapeDuration\":0.014224974,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"},\"labels\"
:{\"container\":\"kube-rbac-proxy-main\",\"endpoint\":\"https-main\",\"instance\":\"10.128.22.45:8443\",\"job\":\"kube-state-metrics\",\"namespace\":\"openshift-monitoring\",\"service\":\"kube-state-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\",\"scrapeUrl\":\"https://10.128.22.45:8443/metrics\",\"globalUrl\":\"https://10.128.22.45:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.162133437Z\",\"lastScrapeDuration\":0.10426944,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"},\"labels\"
:{\"container\":\"kube-rbac-proxy-self\",\"endpoint\":\"https-self\",\"instance\":\"10.128.22.45:9443\",\"job\":\"kube-state-metrics\",\"namespace\":\"openshift-monitoring\",\"pod\":\"kube-state-metrics-754df74859-w8k5h\",\"service\":\"kube-state-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\",\"scrapeUrl\":\"https://10.128.22.45:9443/metrics\",\"globalUrl\":\"https://10.128.22.45:9443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:23.054084271Z\",\"lastScrapeDuration\":0.006612329,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.105:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-0\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.0.105:10250/metrics\",\"globalUrl\":\"https://10.196.0.105:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.857908889Z\",\"lastScrapeDuration\":0.098449187,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\"
,\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.178:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-1\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.3.178:10250/metrics\",\"globalUrl\":\"https://10.196.3.178:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:10.364120808Z\",\"lastScrapeDuration\":0.049145813,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.187:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-2\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.3.187:10250/metrics\",\"globalUrl\":\"https://10.196.3.187:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.430867893Z\",\"lastScrapeDuration\":0.128554348,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\"
,\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.72:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.2.72:10250/metrics\",\"globalUrl\":\"https://10.196.2.72:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.80061983Z\",\"lastScrapeDuration\":0.064200147,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.169:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.2.169:10250/metrics\",\"globalUrl\":\"https://10.196.2.169:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.435691052Z\",\"lastScrapeDuration\":6.756325893,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.1
96.0.199:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.199:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/0\",\"scrapeUrl\":\"https://10.196.0.199:10250/metrics\",\"globalUrl\":\"https://10.196.0.199:10250/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:10.017521429Z\",\"lastScrapeDuration\":0.15388102,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.105:10250\
",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-0\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.0.105:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.0.105:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.882904975Z\",\"lastScrapeDuration\":1.088934562,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.178:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-1\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.3.178:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.3.178:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:07.513564555Z\",\"lastScrapeDuration\":1.739874066,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"
kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.187:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-2\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.3.187:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.3.187:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:11.434396679Z\",\"lastScrapeDuration\":1.785947507,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.72:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.2.72:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.2.72:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:11.898295528Z\",\"lastScrapeDuration\":0.463552284,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io
_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.169:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.2.169:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.2.169:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:12.553753786Z\",\"lastScrapeDuration\":0.536215622,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.199:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/cadvisor\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/1\",\"scrapeUrl\":\"https://10.196.0.199:10250/metrics/cadvisor\",\"globalUrl\":\"https://10.196.0.199:10250/metrics/cadvisor\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.197167099Z\",\"la
stScrapeDuration\":0.515055528,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.105:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-0\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.0.105:10250/metrics/probes\",\"globalUrl\":\"https://10.196.0.105:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:30.57273602Z\",\"lastScrapeDuration\":0.002274152,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/op
enshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.178:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-1\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.3.178:10250/metrics/probes\",\"globalUrl\":\"https://10.196.3.178:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.38781671Z\",\"lastScrapeDuration\":0.003897559,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.3.187:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-2\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.3.187:10250/metrics/probes\",\"globalUrl\":\"https://10.196.3.187:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.911603476Z\",\"lastScrapeDuration\":0.002449197,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io
_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.72:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.2.72:10250/metrics/probes\",\"globalUrl\":\"https://10.196.2.72:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.708813915Z\",\"lastScrapeDuration\":0.00350919,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.2.169:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.2.169:10250/metrics/probes\",\"globalUrl\":\"https://10.196.2.169:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:14.680926571Z\",\"lastScrapeDuration\":0.002269746,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_m
anaged_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/probes\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/2\"},\"labels\":{\"endpoint\":\"https-metrics\",\"instance\":\"10.196.0.199:10250\",\"job\":\"kubelet\",\"metrics_path\":\"/metrics/probes\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/2\",\"scrapeUrl\":\"https://10.196.0.199:10250/metrics/probes\",\"globalUrl\":\"https://10.196.0.199:10250/metrics/probes\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:26.488814277Z\",\"lastScrapeDuration\":0.002607716,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.0.105:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-0\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.0.105:9537/metrics\",\"globalUrl\":\"http://10.196.0.105:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.212484511Z\",\"lastScrapeDuratio
n\":0.006502641,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.3.178:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-1\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.3.178:9537/metrics\",\"globalUrl\":\"http://10.196.3.178:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:35.88625022Z\",\"lastScrapeDuration\":0.005996596,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.3.187:9537\
",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-master-2\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.3.187:9537/metrics\",\"globalUrl\":\"http://10.196.3.187:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:07.861087393Z\",\"lastScrapeDuration\":0.006478659,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.2.72:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-8kq82\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.2.72:9537/metrics\",\"globalUrl\":\"http://10.196.2.72:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.129390454Z\",\"lastScrapeDuration\":0.007325069,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_b
y\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.2.169:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-94fxs\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.2.169:9537/metrics\",\"globalUrl\":\"http://10.196.2.169:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.845647523Z\",\"lastScrapeDuration\":0.006217764,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10250\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/3\"},\"labels\":{\"endpoint\":\"crio\",\"instance\":\"10.196.0.199:9537\",\"job\":\"crio\",\"namespace\":\"kube-system\",\"node\":\"ostest-n5rnf-worker-0-j4pkp\",\"service\":\"kubelet\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/kubelet/3\",\"scrapeUrl\":\"http://10.196.0.199:9537/metrics\",\"globalUrl\":\"http://10.196.0.199:9537/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.695366205Z\",\"lastScrapeDuration\":0.00543279,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_compon
ent\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"no
de-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-master-0\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-p5vmg\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.0.105:9100/metrics\",\"globalUrl\":\"https://10.196.0.105:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.169027835Z\",\"lastScrapeDuration\":0.107931286,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"_
_meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-worker-0-j4pkp\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-7cn6l\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.0.199:9100/metrics\",\"globalUrl\":\"https://10.196.0.199:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.820630372Z\",\"lastScrapeDuration\":0.02903944,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true
\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-worker-0-94fxs\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-fvjvs\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.2.169:9100/metrics\",\"globalUrl\":\"https://10.196.2.169:9100/metrics\",\"lastError\":\"\",\"lastSc
rape\":\"2022-10-13T10:19:33.884564822Z\",\"lastScrapeDuration\":0.028085904,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_servin
g_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-worker-0-8kq82\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-7n85z\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.2.72:9100/metrics\",\"globalUrl\":\"https://10.196.2.72:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:24.691928253Z\",\"lastScrapeDuration\":0.023609318,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exp
orter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-master-1\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-dlzvz\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.3.178:9100/metrics\",\"globalUrl\":\"https://10.196.3.178:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:33.877484535Z\",\"lastScrapeDuration\":0.064567261,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_ku
bernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kuberne
tes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"ostest-n5rnf-master-2\",\"job\":\"node-exporter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"node-exporter-g96tz\",\"service\":\"node-exporter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/node-exporter/0\",\"scrapeUrl\":\"https://10.196.3.187:9100/metrics\",\"globalUrl\":\"https://10.196.3.187:9100/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.169320021Z\",\"lastScrapeDuration\":0.129569916,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"},\"labels\":{\"container\":\"kube-rbac-proxy-main\",\"endpoint\":\"https-main\",\"instance\":\"10.128.22.89:8443\",\"job\":\"openshift-state-metrics\",\"namespace\":\"openshift-monitoring\",\"pod\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"service\":\"openshift-state-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\",\"scrapeUrl\":\"https://10.128.22.89:8443/metrics\",\"globalUrl\":\"https://10.128.22.89:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:18:08.638710192Z\",\"lastScrapeDuration\":0.004114451,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent
_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"},\"labels\":{\"container\":\"kube-rbac-proxy-self\",\"endpoint\":\"https-self\",\"instance\":\"10.128.22.89:9443\",\"job\":\"openshift-state-metrics\",\"namespace\":\"openshift-monitoring\",\"pod\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"service\":\"openshift-state-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/openshift-state-metric
s/1\",\"scrapeUrl\":\"https://10.128.22.89:9443/metrics\",\"globalUrl\":\"https://10.128.22.89:9443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:27.707756506Z\",\"lastScrapeDuration\":0.004276215,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"},\"labels\":{\"container\":\"prometheus-adapter\",\"endpoint\":\"https\",\"instance\":\"10.128.23.77:6443\",\"job\":\"prometheus-adapter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-adapter-86cfd468f7-blrxn\",\"service\":\"prometheus-ad
apter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\",\"scrapeUrl\":\"https://10.128.23.77:6443/metrics\",\"globalUrl\":\"https://10.128.23.77:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:16.892674609Z\",\"lastScrapeDuration\":0.018140589,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"},\"labels\":{\"container\":\"prometheus-adapter\",\"endpoint\":\"https\",\"instance\":\"10.128.23.82:6443\",\"job\":\"prometheus-adapter\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"service\":\"prometheus-ada
pter\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\",\"scrapeUrl\":\"https://10.128.23.82:6443/metrics\",\"globalUrl\":\"https://10.128.23.82:6443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:13.106615906Z\",\"lastScrapeDuration\":0.018610834,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"},\"labels\":{\"container\":\"prometheus-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.18:9091\",\"job\":\"prometheus-k8s\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-k8s-0\",\"service\":\"prometheus-k8s\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\",\"scrapeUrl\":\"https://10.128.23.18:9091/metrics\",\"globalUrl\":\"https://10.128.23.18:9091/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.006329147Z\",\"lastScrapeDuration\":0.035074093,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"},\"labels\":{\"container\":\"prometheus-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.35:9091\",\"job\":\"prometheus-k8s\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-k8s-1\",\"service\":\"prometheus-k8s\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\",\"scrapeUrl\":\"https://10.128.23.35:9091/metrics\",\"globalUrl\":\"https://10.128.23.35:9091/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:10.554808033Z\",\"lastScrapeDuration\":0.032665281,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"},\"labels\":
{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.22.177:8443\",\"job\":\"prometheus-operator\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"service\":\"prometheus-operator\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\",\"scrapeUrl\":\"https://10.128.22.177:8443/metrics\",\"globalUrl\":\"https://10.128.22.177:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:17.019364187Z\",\"lastScrapeDuration\":0.013974122,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"https\",\"instance\":\"10.128.22.239:8443\",\"job\":\"telemeter-client\",\"namespace\":\"openshift-monitoring\",\"pod\":\"telemeter-client-6d8969b4bf-dffrt\",\"service\":\"telemeter-client\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\",\"scrapeUrl\":\"https://10.128.22.239:8443/metrics\",\"globalUrl\":\"https://10.128.22.239:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:15.31457454Z\",\"lastScrapeDuration\":0.004959401,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_
name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_
openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.114:9091\",\"job\":\"thanos-querier\",\"namespace\":\"openshift-monitoring\",\"pod\":\"thanos-querier-6699db6d95-cvbzq\",\"service\":\"thanos-querier\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\",\"scrapeUrl\":\"https://10.128.23.114:9091/metrics\",\"globalUrl\":\"https://10.128.23.114:9091/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.006530509Z\",\"lastScrapeDuration\":0.011510362,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"},\"labels\":{\"container\":\"oauth-proxy\",\"endpoint\":\"web\",\"instance\":\"10.128.23.183:9091\",\"job\":\"thanos-querier\",\"namespace\":\"openshift-monitoring\",\"pod\":\"thanos-querier-6699db6d95-42mpw\",\"service\":\"thanos-querier\"},\"scrapeP
ool\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\",\"scrapeUrl\":\"https://10.128.23.183:9091/metrics\",\"globalUrl\":\"https://10.128.23.183:9091/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.296367017Z\",\"lastScrapeDuration\":0.02241396,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"},\"labels\":{\"container\":\"kube-rbac-proxy-thanos\",\"endpoint\":\"thanos-proxy\",\"instance\":\"10.128.23.35:10902\",\"job\":\"prometheus-k8s-thanos-sidecar\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-k8s-1\",\"service\":\"prometheus-k8s-thanos-sidecar\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\",\"scrapeUrl\":\"https://10.128.23.35:10902/metrics\",\"globalUrl\":\"https://10.128.23.35:10902/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.988989234Z\",\"lastScrapeDuration\":0.007803824,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n 
\\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_
by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"},\"labels\":{\"container\":\"kube-rbac-proxy-thanos\",\"endpoint\":\"thanos-proxy\",\"instance\":\"10.128.23.18:10902\",\"job\":\"prometheus-k8s-thanos-sidecar\",\"namespace\":\"openshift-monitoring\",\"pod\":\"prometheus-k8s-0\",\"service\":\"prometheus-k8s-thanos-sidecar\"},\"scrapePool\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\",\"scrapeUrl\":\"https://10.128.23.18:10902/metrics\",\"globalUrl\":\"https://10.128.23.18:10902/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:18.866499737Z\",\"lastScrapeDuration\":0.006439213,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.19:8443\",\"job\":\"multus-admission-controller\",\"namespace\":\"openshift-multus\",\"pod\":\"multus-admission-controller-flt6k\",\"service\":\"multus-admission-controller\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\",\"scrapeUrl\":\"https://10.128.34.19:8443/metrics\",\"globalUrl\":\"https://10.128.34.19:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:32.76701535Z\",\"lastScrapeDuration\":0.00939373,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:8443\",\"__meta_kubernetes_endpoint_address_ta
rget_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_sig
ned_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.23:8443\",\"job\":\"multus-admission-controller\",\"namespace\":\"openshift-multus\",\"pod\":\"multus-admission-controller-xj8rp\",\"service\":\"multus-admission-controller\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\",\"scrapeUrl\":\"https://10.128.34.23:8443/metrics\",\"globalUrl\":\"https://10.128.34.23:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:31.884037882Z\",\"lastScrapeDuration\":0.009206721,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.59:8443\",\"job\":\"multus-admission-controller\",\"namespace\":\"openshift-multus\",\"pod\":\"multus-admission-controller-pprg6\",\"service\":\"multus-admission-controller\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\",\"scrapeUrl\":\"https://10.128.34.59:8443/metrics\",\"globalUrl\":\"https://10.128.34.59:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:12.649242685Z\",\"lastScrapeDuration\":0.003407487,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.62:8443\",\"__meta_kubernetes_endpoint_address_
target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-98jr8\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.62\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:4d:80:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.62\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:4d:80:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.62\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-98jr8\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b9e25138-56b7-4086-b0d8-bbfad8d59d29\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__met
a_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.62:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-98jr8\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.34.62:8443/metrics\",\"globalUrl\":\"https://10.128.34.62:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.029317843Z\",\"lastScrapeDuration\":0.011697606,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.92:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-xh8kk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.92\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:94:47\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.92\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:94:47\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.92\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-xh8kk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"78e54083-207a-4a1d-9ac3-1e61e4c3a94d\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.92:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-xh8kk\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.34.92:8443/metrics\",\"globalUrl\":\"https://10.128.34.92:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.19888255Z\",\"lastScrapeDuration\":0.004268387,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.35.157:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_ad
dress_target_name\":\"network-metrics-daemon-9vnl8\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.35.157\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:80:04:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.35.157\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:80:04:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.35.157\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-9vnl8\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"eab7a941-acc9-4f7a-9e27-bfda6efdc8b7\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent
_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.35.157:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-9vnl8\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.35.157:8443/metrics\",\"globalUrl\":\"https://10.128.35.157:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:29.526186331Z\",\"lastScrapeDuration\":0.002245776,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.35.46:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-6p764\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.35.46\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:21:c6:58\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.35.46\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:21:c6:58\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.35.46\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-6p764\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f1a5dd1f-c96d-435e-a2c2-414ef30007b0\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.35.46:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-6p764\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.35.46:8443/metrics\",\"globalUrl\":\"https://10.128.35.46:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:28.671457298Z\",\"lastScrapeDuration\":0.002166266,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.135:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endp
oint_address_target_name\":\"network-metrics-daemon-mmmtp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.135\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:0f:7c:01\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.135\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:0f:7c:01\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.135\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-mmmtp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"3e837b28-47f3-449c-a549-2f35716eadac\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_serv
ice_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.135:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-mmmtp\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.34.135:8443/metrics\",\"globalUrl\":\"https://10.128.34.135:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:36.885591077Z\",\"lastScrapeDuration\":0.00666558,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.34.247:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-rwwwz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.247\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ad:57:02\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.247\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ad:57:02\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.34.247\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-rwwwz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5cc84773-7d05-45e6-9e0e-c1d785d19d6f\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"},\"labels\":{\"container\":\"kube-rbac-proxy\",\"endpoint\":\"metrics\",\"instance\":\"10.128.34.247:8443\",\"job\":\"network-metrics-service\",\"namespace\":\"openshift-multus\",\"pod\":\"network-metrics-daemon-rwwwz\",\"service\":\"network-metrics-service\"},\"scrapePool\":\"serviceMonitor/openshift-multus/monitor-network/0\",\"scrapeUrl\":\"https://10.128.34.247:8443/metrics\",\"globalUrl\":\"https://10.128.34.247:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:34.583768924Z\",\"lastScrapeDuration\":0.004258343,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.103.204:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes
_endpoint_address_target_name\":\"network-check-source-84dfc9ddb-46tsr\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"network-check-source\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-source\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.204\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5f:a0:61\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.204\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5f:a0:61\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-source-84dfc9ddb\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.103.204\",\"__meta_kubernetes_pod_label_app\":\"network-check-source\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"84dfc9ddb\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-source-84dfc9ddb-46tsr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"750fdda1-ded7-4131-9bd7-f42602a669d4\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_label_app\":\"network-check-source\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"network-check-source\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"},\"labels\":{\"container\":\"check-endpoints\",\"endpoint\":\"check-endpoints\",\"instance\":\"10.128.103.204:17698\",\"job\":\"network-check-source\",\"namespace\":\"openshift-network-diagnostics\",\"pod\":\"network-check-source-84dfc9ddb-46tsr\",\"service\":\"network-check-source\"},\"scrapePool\":\"serviceMonitor/openshift-netw
ork-diagnostics/network-check-source/0\",\"scrapeUrl\":\"https://10.128.103.204:17698/metrics\",\"globalUrl\":\"https://10.128.103.204:17698/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:09.779419396Z\",\"lastScrapeDuration\":0.013312408,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.93.117:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"catalog-operator-7c7d96d8d6-bfvts\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"catalog-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"catalog-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.117\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:8b:73\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.117\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:8b:73\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"catalog-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"catalog-operator-7c7d96d8d6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.93.117\",\"__meta_kubernetes_pod_label_app\":\"catalog-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c7d96d8d6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"catalog-operator-7c7d96d8d6-bfvts\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"245bde86-6823-4aaf-9b27-aaad0428d6f6\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"catalog-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@166550484
8\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"catalog-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"catalog-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\"},\"labels\":{\"container\":\"catalog-operator\",\"endpoint\":\"https-metrics\",\"instance\":\"10.128.93.117:8443\",\"job\":\"catalog-operator-metrics\",\"namespace\":\"openshift-operator-lifecycle-manager\",\"pod\":\"catalog-operator-7c7d96d8d6-bfvts\",\"service\":\"catalog-operator-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\",\"scrapeUrl\":\"https://10.128.93.117:8443/metrics\",\"globalUrl\":\"https://10.128.93.117:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:19.31223398Z\",\"lastScrapeDuration\":0.006748421,\"health\":\"up\"},{\"discoveredLabels\":{\"__address__\":\"10.128.92.123:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"olm-operator-56f75d4687-pdzb6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"olm-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"olm-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.92.123\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:08:05:71\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.92.123\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:08:05:71\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"olm-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"olm-operator-56f75d4687\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.92.123\",\"__meta_kubernetes_pod_label_app\":\"olm-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"56f75d4687\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"olm-operator-56f75d4687-pdzb6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90bf0bdc-6d48-4eb2-bc10-49acdc5bc676\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"olm-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"olm-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"olm-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\"},\"labels\":{\"container\":\"olm-operator\",\"endpoint\":\"https-metrics\",\"instance\":\"10.128.92.123:8443\",\"job\":\"olm-operator-metrics\",\"namespace\":\"openshift-operator-lifecycle-manager\",\"pod\":\"olm-operator-56f75d4687-pdzb6\",\"service\":\"olm-operator-metrics\"},\"scrapePool\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\",\"scrapeUrl\":\"https://10.128.92.123:8443/metrics\",\"globalUrl\":\"https://10.128.92.123:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:15.389777292Z\",\"lastScrapeDuration\":0.004625075,\"health\":\"up
\"},{\"discoveredLabels\":{\"__address__\":\"10.128.56.252:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"service-ca-operator-6d88c88495-pzm78\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"service-ca-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-service-ca-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.56.252\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c7:ec:8e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.56.252\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c7:ec:8e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"service-ca-operator-6d88c88495\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.56.252\",\"__meta_kubernetes_pod_label_app\":\"service-ca-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d88c88495\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"service-ca-operator-6d88c88495-pzm78\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"870f3d5b-b205-4ac6-9b28-042e2d7859b1\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"service-ca-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-service-ca-operator/service-ca-operator/0\"},\"labels\":{\"endpoint\":\"https\",\"instance\":\"10.128.56.252:8443\",\"job\":\"metrics\",\"namespace\":\"openshift-service-ca-operator\",\"pod\":\"service-ca-operator-6d88c88495-pzm78\",\"service\":\"metrics\"},\"scrapePool\":\"serviceMonitor/openshift-service-ca-operator/service-ca-operator/0\",\"scrapeUrl\":\"https://10.128.56.252:8443/metrics\",\"globalUrl\":\"https://10.128.56.252:8443/metrics\",\"lastError\":\"\",\"lastScrape\":\"2022-10-13T10:19:22.524570882Z\",\"lastScrapeDuration\":0.028526831,\"health\":\"up\"}],\"droppedTargets\":[{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_k
ubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kuber
netes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kub
ernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",
\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_
serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_nam
e\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_ku
bernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_
labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kube
rnetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kub
ernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_
serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:17698\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df
\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent
_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_na
me\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:17698\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"check-endpoints\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver-check-endpoints\",\"__meta_kubernetes_pod_container_port_name\":\"check-endpoints\",\"__meta_kubernetes_pod_container_port_number\":\"17698\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_i
nclude_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.187:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.187\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b6:a7:e5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.120.187\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_la
belpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-6sffs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"54e1b44b-c540-4624-91fe-9b6f36accc2d\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.120.232:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.120.232\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:54:b1:f9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.120.232\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-kctsl\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b70404a-570b-45a6-b320-026aa5668a79\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_ku
bernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.121.9:8443\",\"__meta_kubernetes_endpoints_name\":\"check-endpoints\",\"__meta_kubernetes_namespace\":\"openshift-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.121.9\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:d3:ca\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_desired_generation\":\"6\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"V5RdOA==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"-oe23w==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_desired_generation\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_config_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_image_import_ca_configmap\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_apiserver_trusted_ca_bundle_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"openshift-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-bfb9686df\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.121.9\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-apiserver-a\",\"__meta_kubernetes_pod_label_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"bfb9686df\",\"__meta_kubernetes_pod_label_revision\":\"2\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__m
eta_kubernetes_pod_labelpresent_openshift_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-bfb9686df-cwl5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a7255601-d802-4550-8209-203a55292301\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_name\":\"check-endpoints\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-apiserver/openshift-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_
io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-m
etrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpre
sent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service
_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_ap
p\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/
openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kube
rnetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapsh
otter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\
",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.10
5\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta
_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kube
rnetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta
_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9203\",\"__meta_kubernetes_endpo
int_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\"
,\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"opensta
ck-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9205\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"snapshotter-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kube
rnetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"snapshotter-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"snapshotter-m\",\"__meta_kubernetes_pod_container_port_number\":\"9205\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098
d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe494
49df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubern
etes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"
__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_open
shift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9203\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"attacher-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"attacher-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"attacher-m\",\"__meta_kubernetes_pod_container_port_number\":\"9203\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_s
ervice_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta
_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9202\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"provisioner-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"provisioner-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"provisioner-m\",\"__meta_kubernetes_pod_container_port_number\":\"9202\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driv
er-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":
{\"__address__\":\"10.196.3.178:9204\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"resizer-m\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"resizer-kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"resizer-m\",\"__meta_kubernetes_pod_container_port_number\":\"9204\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_en
dpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-tq756\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"14b844c7-34f0-4e5a-a059-46585b4a8d02\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10301\",\"__meta_kubernetes_endpoints_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-csi-drivers\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"privileged\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"xI4Q_A==\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_4b51488ec5b742a098d092dfe49449df0986e\":\"true\",\"__meta_kubernetes_pod_container_name\":\"csi-driver\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",
\"__meta_kubernetes_pod_container_port_number\":\"10301\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"openstack-cinder-csi-driver-controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7d849f4cf\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openstack-cinder-csi-driver-controller-7d849f4cf-qk9l7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"88ee14a3-a346-4018-9938-6104f4c112c8\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"openstack-cinder-csi-driver-controller-metrics-serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"openstack-cinder-csi-driver-controller-metrics\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"openstack-cinder-csi-driver-controller-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.52.143:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"csi-snapshot-webhook-7b969bc879-j7bqg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"csi-snapshot-webhook\",\"__meta_kubernetes_namespace\":\"openshift-cluster-storage-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.52.143\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:4c:d4:b3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.52.143\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:4c:d4:b3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"webhook\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"csi-snapshot-webhook-7b969bc879\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.52.143\",\"__meta_kubernetes_pod_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7b969bc879\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"csi-snapshot-webhook-7b969bc879-j7bqg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f72f577-7838-4bdc-a7d7-809d2c435ee8\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"csi-snapshot-webhook-secret\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"csi-snapshot-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.52.66:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"csi-snapshot-webhook-7b969bc879-tzkvg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"csi-snaps
hot-webhook\",\"__meta_kubernetes_namespace\":\"openshift-cluster-storage-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.52.66\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:89:98:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.52.66\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:89:98:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"webhook\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"csi-snapshot-webhook-7b969bc879\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.52.66\",\"__meta_kubernetes_pod_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7b969bc879\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"csi-snapshot-webhook-7b969bc879-tzkvg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"bc78ca4d-597c-403c-8377-9e25ec01a959\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"csi-snapshot-webhook-secret\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"csi-snapshot-webhook\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"csi-snapshot-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-stor
age-operator/cluster-storage-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.53.147:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"csi-snapshot-controller-operator-547fc5c4f-f6m26\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"csi-snapshot-controller-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"csi-snapshot-controller-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-cluster-storage-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.53.147\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:60:80:5f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.53.147\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:60:80:5f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"csi-snapshot-controller-operator-547fc5c4f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.53.147\",\"__meta_kubernetes_pod_label_app\":\"csi-snapshot-controller-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"547fc5c4f\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"csi-snapshot-controller-operator-547fc5c4f-f6m26\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"25bbc5e7-e57b-4530-96a1-13d9a30fb5f2\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshi
ft_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"csi-snapshot-controller-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"csi-snapshot-controller-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.133.246:60000\",\"__meta_kubernetes_endpoints_label_name\":\"console-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"metrics\",\"__meta_kubernetes_namespace\":\"openshift-console-operator\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.133.246\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:7b:40:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.133.246\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:7b:40:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"console-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"console-operator-7dbd68dd4b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.133.246\",\"__meta_kubernetes_pod_label_name\":\"console-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7dbd68dd4b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"console-operator-7dbd68dd4b-44sxf\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e9f337bf-a4d7-43c4-b3f1-154403484b7f\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_r
elease_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"console-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-console-operator/console-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.114:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.126.114\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"33957bcb-47be-49a6-83ad-300d0d7ffb69\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.55:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.55\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.55\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.126.55\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f5ce003d-9392-40ac-a34e-8aa47c675f95\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.73:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-n757c\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.73\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.73\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.126.73\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-n757c\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"22ea4790-c277-42c5-879d-f80c4aaa075d\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.108:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-25bww\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.108\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.108\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.127.108\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-25bww\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"c0db5e71-94aa-4c0a-b650-7e5e3cb98e3e\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.168:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.168\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.168\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.127.168\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"31663356-b33c-43ae-a208-ed3064fcf0ee\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.52:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-hpsll\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"dns\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.52\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.52\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.127.52\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-hpsll\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ae463ca1-be02-483f-9849-3e204beb4658\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.114:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e8:52:5b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.126.114\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-wzmlj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"33957bcb-47be-49a6-83ad-300d0d7ffb69\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.55:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.55\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.55\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:2a:59\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.126.55\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-xb9vg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f5ce003d-9392-40ac-a34e-8aa47c675f95\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.126.73:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-n757c\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.73\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.126.73\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:12:b6\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.126.73\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-n757c\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"22ea4790-c277-42c5-879d-f80c4aaa075d\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.108:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-25bww\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.108\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.108\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c0:c8:76\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.127.108\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-25bww\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"c0db5e71-94aa-4c0a-b650-7e5e3cb98e3e\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.168:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.168\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.168\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c1:02:83\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.127.168\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-x6w5l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"31663356-b33c-43ae-a208-ed3064fcf0ee\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.127.52:5353\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"dns-default-hpsll\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"dns-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_endpoints_name\":\"dns-default\",\"__meta_kubernetes_namespace\":\"openshift-dns\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.52\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.127.52\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:53:cf:90\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"dns\",\"__meta_kubernetes_pod_container_port_name\":\"dns\",\"__meta_kubernetes_pod_container_port_number\":\"5353\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"dns-default\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.127.52\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6b85645b5f\",\"__meta_kubernetes_pod_label_dns_operator_openshift_io_daemonset_dns\":\"default\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_dns_operator_openshift_io_daemonset_dns\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"dns-default-hpsll\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ae463ca1-be02-483f-9849-3e204beb4658\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"dns-default-metrics-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_dns_operator_openshift_io_owning_dns\":\"default\",\"__meta_kubernetes_service_labelpresent_dns_operator_openshift_io_owning_dns\":\"true\",\"__meta_kubernetes_service_name\":\"dns-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-dns/dns-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.83.90:5000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"image-registry-5dcfbfdb49-m9mjk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"5000-tcp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_docker_registry\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_endpoints_name\":\"image-registry\",\"__meta_kubernetes_namespace\":\"openshift-image-registry\",\"__meta_kubernetes_pod_annotation_imageregistry_operator_openshift_io_dependencies_checksum\":\"sha256:c2e4379a3614d3c6245d6a72b78f2bc288bf39df517d68b7c6dd5439a409036c\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.83.90\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1e:6d:d3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.83.90\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1e:6d:d3\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_imageregistry_operator_openshift_io_dependencies_checksum\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry\",\"__meta_kubernetes_pod_container_port_number\":\"5000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"image-registry-5dcfbfdb49\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.83.90\",\"__meta_kubernetes_pod_label_docker_registry\":\"default\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5dcfbfdb49\",\"__meta_kubernetes_pod_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"image-registry-5dcfbfdb49-m9mjk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b6cdb3a-3f4f-4e5e-8e6c-5dda0d62ec22\",\"__meta_kubernetes_service_annotation_imageregistry_operator_openshift_io_checksum\":\"sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"image-registry-tls\",\"__meta_kubernetes_service_annotationpresent_imageregistry_operator_openshift_io_checksum\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_docker_registry\":\"default\",\"__meta_kubernetes_service_labelpresent_docker_registry\":\"true\",\"__meta_kubernetes_service_name\":\"image-registry\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-image-registry/image-registry-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.83.151:60000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"image-registry-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"image-registry-operator\",\"__meta_kubernetes_namespace\":\"openshift-image-registry\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.83.151\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ca:de:36\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.83.151\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ca:de:36\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-image-registry-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-image-registry-operator-6cfc44cd58\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.83.151\",\"__meta_kubernetes_pod_label_name\":\"cluster-image-registry-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6cfc44cd58\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-image-registry-operator-6cfc44cd58-xdwtw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6f65971b-96c4-4cbd-9b8f-df3a6984fed3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"image-registry-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"image-registry-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"image-registry-operator\",\"__metrics_path__\":\"/extensions/v2/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-image-registry/image-registry/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_
operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7a994a2f-c4ec-4a4c-b4ae-b9ef7f93bb00\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kuber
netes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"74040c8a-de64-4dff-943f-8e9a926a790e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:80\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"http\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_en
dpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"80\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-qcfbg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7a994a2f-c4ec-4a4c-b4ae-b9ef7f93bb00\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:80\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"http\",\"__meta_kubernetes_endpoint_port_proto
col\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_endpoints_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_endpoints_name\":\"router-internal-default\",\"__meta_kubernetes_namespace\":\"openshift-ingress\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"hostnetwork\",\"__meta_kubernetes_pod_annotation_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"10\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_unsupported_do_not_use_openshift_io_override_liveness_grace_period_seconds\":\"true\",\"__meta_kubernetes_pod_container_name\":\"router\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"80\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"router-default-697ff75b79\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"default\",\"__meta_kubernetes_pod_label_ingresscontroller_operator_openshift_io_hash\":\"56dd8c545c\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"697ff75b79\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_deployment_ingresscontroller\":\"true\",\"__meta_kubernetes_pod_labelpresent_ingresscontroller_operator_openshift_io_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"router-default-697ff75b79-t6b78\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"74040c8a-de64-4dff-943f-8e9a926a790e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"router-metrics-certs-default\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"default\",\"__meta_kubernetes_service_labelpresent_ingresscontroller_operator_openshift_io_owning_ingresscontroller\":\"true\",\"__meta_kubernetes_service_name\":\"router-internal-default\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-ingress/router-default/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10357\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_
annotation_kubernetes_io_config_hash\":\"30cc4fad-2707-49ca-8af4-654dfe7049f2\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"30cc4fad-2707-49ca-8af4-654dfe7049f2\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:01.957733716Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-policy-controller\",\"__meta_kubernetes_pod_container_port_number\":\"10357\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"9fe004e7-c0d0-4b1a-bc98-e115973fe308\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10357\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_logs_container
\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"4d079c6f-40c7-4c4b-9915-95bfdc4d90bf\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"4d079c6f-40c7-4c4b-9915-95bfdc4d90bf\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:27:50.144170849Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-policy-controller\",\"__meta_kubernetes_pod_container_port_number\":\"10357\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"dafaafdf-d6ab-43af-a3b8-182083a9c825\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10357\",\"__meta_kubernetes_endpoints_name\":\"kube-controller-manager\",\"__meta_kubernetes_namespace\":\"openshift-kube-controller-manager\",\"__meta_kubernetes_pod_
annotation_kubectl_kubernetes_io_default_logs_container\":\"kube-controller-manager\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"8673eaec-7022-428b-9556-52d3f1ba194f\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"8673eaec-7022-428b-9556-52d3f1ba194f\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:15.460702568Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_logs_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-policy-controller\",\"__meta_kubernetes_pod_container_port_number\":\"10357\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"kube-controller-manager\",\"__meta_kubernetes_pod_label_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_label_revision\":\"14\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_kube_controller_manager\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-controller-manager-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e6e98f52-d119-440e-88f0-02ce9237fa4d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"kube-controller-manager\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9654\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_na
me\":\"kuryr-controller-7654df4d98-f2qvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-controller\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9654\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-controller-7654df4d98\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kuryr-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7654df4d98\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-controller-7654df4d98-f2qvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2543a36c-08af-4a31-9ae6-f0cb7c99a745\",\"__meta_kubernetes_service_label_app\":\"kuryr-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-cjcgk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app
\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-cjcgk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"bbdf1c26-e361-4015-9404-a307c40d0734\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-xzbzv\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-xzbzv\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5
rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"false\",\"__meta_kubernetes_pod_uid\":\"9a46eb61-8782-4c26-9e89-8fef6e4a33e9\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-crfvc\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-crfvc\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"false\",\"__meta_kubernetes_pod_uid\":\"de39c947-6203-413a-aa51-b069776af721\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-2rrvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__me
ta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-2rrvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e6e1bace-f2ff-419b-9206-323d49ce67ec\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-ndzt5\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__met
a_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-ndzt5\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5497497a-dd9f-464c-a031-1af7c8a3123c\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9655\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kuryr-cni-t448w\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"kuryr-cni\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kuryr-cni\",\"__meta_kubernetes_namespace\":\"openshift-kuryr\",\"__meta_kubernetes_pod_container_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9655\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"kuryr-cni\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"kuryr-cni\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_configuration_hash\":\"9f007a0d89c9ecbec4bde2cb663b452a\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6747cc7655\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_configuration_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"kuryr-cni-t448w\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"568d2b5d-b1f3-4810-8ef5-058a27e6266a\",\"__meta_kubernetes_service_label_app\":\"kuryr-cni\",\"__meta_kubernetes
_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"kuryr-cni\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"http\",\"job\":\"serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpr
esent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-web
hook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationprese
nt_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annot
ation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-w
ebhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_ser
vice_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\
",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernet
es_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_o
penshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_lab
el_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kuber
netes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_addres
s_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclu
de_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpre
sent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta
_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_ta
rget_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_r
elease_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_end
points_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.42\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.42\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-a
pi-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service
_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoin
ts_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.42\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.42\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-o
perator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_si
gned_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubern
etes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alph
a_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_al
pha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__met
a_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_se
rvice_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_s
ervice_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-we
bhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation
present_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_po
d_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-barem
etal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controller
s-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8
s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_
kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"t
rue\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook
\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_s
ervice_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_se
rvice_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webho
ok\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service
_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__me
ta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_e
ndpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_opens
hift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_
app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service
_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8
s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_
kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"t
rue\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_in
clude_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"clust
er-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.42:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.42\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.42\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fd:e8:1a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-operator-74b9f87587\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.42\",\"__meta_kubernetes_pod_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"74b9f87587\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-operator-74b9f87587-s6jf2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90b05b44-49bd-4179-af1a-b1ffb84bf9e4\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift
_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\"
:\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__
meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubern
etes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__
meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alp
Prometheus service-discovery metadata (`discoveredLabels`) for job `serviceMonitor/openshift-machine-api/machine-api-controllers/2`, namespace `openshift-machine-api`. Every discovered target uses `__scheme__=https` and `__metrics_path__=/metrics`; the distinguishing fields per target are:

Address | Endpoints object | Container | Port name
---|---|---|---
10.128.44.154:9442 | machine-api-operator-webhook | machine-healthcheck-controller | healthz
10.128.44.154:8442 | machine-api-operator-webhook | kube-rbac-proxy-machineset-mtrc | machineset-mtrc
10.128.44.154:8441 | machine-api-operator-webhook | kube-rbac-proxy-machine-mtrc | machine-mtrc
10.128.44.154:8444 | machine-api-operator-webhook | kube-rbac-proxy-mhc-mtrc | mhc-mtrc
10.128.44.18:9443 | cluster-baremetal-webhook-service | cluster-baremetal-operator | webhook-server
10.128.44.18:8443 | cluster-baremetal-webhook-service | kube-rbac-proxy | https
10.128.44.154:8442 | machine-api-controllers | kube-rbac-proxy-machineset-mtrc | machineset-mtrc
10.128.44.154:8441 | machine-api-controllers | kube-rbac-proxy-machine-mtrc | machine-mtrc
10.128.44.154:8443 | machine-api-controllers | machineset-controller | webhook-server
10.128.44.154:9441 | machine-api-controllers | machineset-controller | healthz
10.128.44.154:9440 | machine-api-controllers | machine-controller | healthz
10.128.44.154:9442 | machine-api-controllers | machine-healthcheck-controller | healthz
10.128.44.18:8443 | cluster-baremetal-operator-service | kube-rbac-proxy | https
10.128.44.18:9443 | cluster-baremetal-operator-service | cluster-baremetal-operator | webhook-server
10.128.44.42:8443 | machine-api-operator | kube-rbac-proxy | https

All targets resolve to one of three `Running` pods (controller kind ReplicaSet), each attached to the default `kuryr` network on `eth0`:

Pod | Node | Pod IP | Host IP | SCC | UID
---|---|---|---|---|---
machine-api-controllers-5d5dd7564c-ght8d | ostest-n5rnf-master-1 | 10.128.44.154 | 10.196.3.178 | restricted | fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53
cluster-baremetal-operator-7c54dfc55f-kdmgn | ostest-n5rnf-master-0 | 10.128.44.18 | 10.196.0.105 | anyuid | 848a361e-31d0-4ee3-87f2-362c668a3ea3
machine-api-operator-74b9f87587-s6jf2 | ostest-n5rnf-master-0 | 10.128.44.42 | 10.196.0.105 | restricted | 90b05b44-49bd-4179-af1a-b1ffb84bf9e4

The backing services carry serving-cert annotations signed by `openshift-service-serving-signer@1665504848`, with serving-cert secrets `machine-api-operator-webhook-cert`, `machine-api-controllers-tls`, `cluster-baremetal-webhook-server-cert`, `cluster-baremetal-operator-tls`, and `machine-api-operator-tls`.
_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\"
:\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__
meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-controllers/2\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:9192\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"9192\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by
\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.45.39:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.45.39\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2a:27:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-autoscaler-operator-774b846b57\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.45.39\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"774b846b57\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-autoscaler-operator-774b846b57-hdvlz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"7b5bd097-8bf7-4562-96fb-1796ba078ad7\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-autoscaler-operator-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-autoscaler-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-autoscaler-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernete
s_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_o
penshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_op
enshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_k
ubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_servi
ce_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service
_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"machine-api-operator-webho
ok\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-operator-webhook\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"machine-api-operator-webhook-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpre
sent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"machine-api-operator-webhook\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-operator-webhook\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-webhook-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_a
nnotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-webhook-server-cert\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremeta
l-webhook-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8442\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machineset-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machineset-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\
",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8441\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"machine-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-machine-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"machine-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8444\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_targe
t_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-mhc-mtrc\",\"__meta_kubernetes_pod_container_port_name\":\"mhc-mtrc\",\"__meta_kubernetes_pod_container_port_number\":\"8444\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_rele
ase_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:8443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9441\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8
s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machineset-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9441\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"tru
e\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9440\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9440\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_se
rvice_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.154:9442\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"controller\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"machine-api-controllers\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:38:00:41\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"WAXdSw==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_machine_api_mao_trusted_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"machine-healthcheck-controller\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"9442\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"machine-api-controllers-5d5dd7564c\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.44.154\",\"__meta_kubernetes_pod_label_api\":\"clusterapi\",\"__meta_kubernetes_pod_label_k8s_app\":\"controller\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5d5dd7564c\",\"__meta_kubernetes_pod_labelpresent_api\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"machine-api-controllers-5d5dd7564c-ght8d\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fc5bb394-9cc5-4da8-9d38-50b1cbcbaa53\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"machine-api-controllers-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"controller\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"machine-api-controllers\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_na
me\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_
openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.44.18:9443\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-baremetal-operator-service\",\"__meta_kubernetes_namespace\":\"openshift-machine-api\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.44.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a2:18:2b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_container_port_name\":\"webhook-server\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-baremetal-operator-7c54dfc55f\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.44.18\",\"__meta_kubernetes_pod_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c54dfc55f\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-baremetal-operator-7c54dfc55f-kdmgn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"848a361e-31d0-4ee3-87f2-362c668a3ea3\",\"__meta_kubernetes_service_annotation_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"cluster-baremetal-operator-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_exclude_release_openshift_io_internal_openshift_hosted\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"cluster-baremetal-operator\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-baremetal-operator-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-machine-api/machine-api-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.100:50051\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"community-operators-6xhq7\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_olm_service_spec_hash\":\"79986496d9\",\"__meta_kubernetes_endpoints_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_endpoints_name\":\"community-operators\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.100\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:95:36:6d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.100\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:95:36:6d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operatorframework_io_managed_by\":\"marketplace-operator\",\"__meta_kubernetes_pod_annotation_operatorframework_io_priorityclass\":\"system-cluster-critical\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_managed_by\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_priorityclass\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry-server\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"50051\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.79.100\",\"__meta_kubernetes_pod_label_olm_catalogSource\":\"community-operators\",\"__meta_kubernetes_pod_label_olm_pod_spec_hash\":\"584cc5d5c6\",\"__meta_kubernetes_pod_labelpresent_catalogsource_operators_coreos_com_update\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_catalogSource\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_pod_spec_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"community-operators-6xhq7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"1d5463c2-ae3f-4ae2-b8c2-461fcf8304f6\",\"__meta_kubernetes_service_label_olm_service_spec_hash\":\"79986496d9\",\"__meta_kubernetes_service_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_service_name\":\"community-operators\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.113:50051\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"redhat-operators-7vq7x\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_olm_service_spec_hash\":\"f6ff9c676\",\"__meta_kubernetes_endpoints_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_endpoints_name\":\"redhat-operators\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_s
tatus\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.113\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:46:75:5f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.113\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:46:75:5f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operatorframework_io_managed_by\":\"marketplace-operator\",\"__meta_kubernetes_pod_annotation_operatorframework_io_priorityclass\":\"system-cluster-critical\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_managed_by\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_priorityclass\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry-server\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"50051\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.79.113\",\"__meta_kubernetes_pod_label_olm_catalogSource\":\"redhat-operators\",\"__meta_kubernetes_pod_label_olm_pod_spec_hash\":\"7745cfd586\",\"__meta_kubernetes_pod_labelpresent_catalogsource_operators_coreos_com_update\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_catalogSource\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_pod_spec_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"redhat-operators-7vq7x\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"489c18ef-1d31-4d13-8856-0137e3d5ee19\",\"__meta_kubernetes_service_label_olm_service_spec_hash\":\"f6ff9c676\",\"__meta_kubernetes_service_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_service_name\":\"redhat-operators\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.88:50051\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"certified-operators-g5v7x\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_olm_service_spec_hash\":\"676574974f\",\"__meta_kubernetes_endpoints_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_endpoints_name\":\"certified-operators\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.88\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:cc:69:e1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.88\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:cc:69:e1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operatorframework_io_managed_by\":\"marketplace-operator\",\"__meta_kubernetes_pod_annotation_operatorframework_io_priorityclass\":\"system-cluster-critical\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_managed_by\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_priorityclass\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry-server\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"50051\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.79.88\",\"__meta_kubernetes_pod_label_olm_catalogSource\":\"certified-operators\",\"__meta_kubernetes_pod_label_olm_pod_spec_hash\":\"78dcddd844\",\"__meta_kubernetes_pod_labelpresent_catalogsource_operators_coreos_com_update\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_catalogSource\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_pod_spec_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"certified-operators-g5v7x\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"fcb29ab8-aa9d-4fd8-b085-ce0098072c59\",\"__meta_kubernetes_service_label_olm_service_spec_hash\":\"676574974f\",\"__meta_kubernetes_service_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_service_name\":\"certified-operators\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.141:8383\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_name\":\"marketplace-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"marketplace-operator-79fb778f6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.79.141\",\"__meta_kubernetes_pod_label_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79fb778f6b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b3bba0b4-92e7-461f-abff-61fc1b5cd349\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"marketplace-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"marketplace-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.141:60000\",\"__meta_kubernetes_endpoints_label_name\":\"marketplace-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n 
\\\"10.128.79.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"60000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"marketplace-operator-79fb778f6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.79.141\",\"__meta_kubernetes_pod_label_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79fb778f6b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b3bba0b4-92e7-461f-abff-61fc1b5cd349\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"marketplace-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"marketplace-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.79.141:8080\",\"__meta_kubernetes_endpoints_label_name\":\"marketplace-operator\",\"__meta_kubernetes_endpoints_labelpresent_name\":\"true\",\"__meta_kubernetes_endpoints_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": 
\\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.79.141\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:e9:71:3f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_container_port_name\":\"healthz\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"marketplace-operator-79fb778f6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.79.141\",\"__meta_kubernetes_pod_label_name\":\"marketplace-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79fb778f6b\",\"__meta_kubernetes_pod_labelpresent_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"marketplace-operator-79fb778f6b-qc8zr\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b3bba0b4-92e7-461f-abff-61fc1b5cd349\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"marketplace-operator-metrics\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_name\":\"marketplace-operator\",\"__meta_kubernetes_service_labelpresent_name\":\"true\",\"__meta_kubernetes_service_name\":\"marketplace-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.78.179:50051\"
,\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"redhat-marketplace-hhmpc\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_olm_service_spec_hash\":\"fc99d9bdb\",\"__meta_kubernetes_endpoints_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_endpoints_name\":\"redhat-marketplace\",\"__meta_kubernetes_namespace\":\"openshift-marketplace\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.78.179\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:27:89:d2\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.78.179\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:27:89:d2\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotation_operatorframework_io_managed_by\":\"marketplace-operator\",\"__meta_kubernetes_pod_annotation_operatorframework_io_priorityclass\":\"system-cluster-critical\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_managed_by\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operatorframework_io_priorityclass\":\"true\",\"__meta_kubernetes_pod_container_name\":\"registry-server\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"50051\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.78.179\",\"__meta_kubernetes_pod_label_olm_catalogSource\":\"redhat-marketplace\",\"__meta_kubernetes_pod_label_olm_pod_spec_hash\":\"fbf4dd465\",\"__meta_kubernetes_pod_labelpresent_olm_catalogSource\":\"true\",\"__meta_kubernetes_pod_labelpresent_olm_pod_spec_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"redhat-marketplace-hhmpc\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"3f0ff733-7469-4eb7-9a01-55c45eca0afe\",\"__meta_kubernetes_service_label_olm_service_spec_hash\":\"fc99d9bdb\",\"__meta_kubernetes_service_labelpresent_olm_service_spec_hash\":\"true\",\"__meta_kubernetes_service_name\":\"redhat-marketplace\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-marketplace/marketplace-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app
_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\
"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_ce
rt_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io
_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoi
nts_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoi
nt_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__met
a_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kuber
netes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kuberne
tes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_ku
bernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"tha
nos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_n
ame\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__me
ta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_end
points_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_servic
e_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostes
t-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_
kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotati
onpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__m
eta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operato
r_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpo
int_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_nam
e\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":
\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernete
s_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"_
_meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernet
es_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"
__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name
\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp
\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_p
od_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernete
s_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_la
belpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": 
\\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"promethe
us-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n 
\\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": 
\\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_add
ress_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version
\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_
alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-6
20db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true
\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_a
lertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-62
0db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\
",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kub
ernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_po
d_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_ku
bernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredL
abels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabe
ls\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/alertmanager/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n 
\\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": 
[\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"
openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": 
\\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpres
ent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\"
:\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n 
\\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_opens
hift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}}
,{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\
"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-
4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_lab
elpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_l
abel_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-55
13-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_
pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_l
abel_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-55
13-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_
pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": 
\\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"
10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\
":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_k
ubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prom
etheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_
kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_pr
ometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes
_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",
\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_lab
elpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_
io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"
__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelp
resent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kub
ernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_nam
e\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_
io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__met
a_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serv
ing_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_ap
p_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__met
a_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-66
99db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_
service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta
_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secr
et_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_
endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-queri
er-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta
_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_ku
bernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true
\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},
{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_serv
ice_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\
"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"_
_meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:2379\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"etcd\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespa
ce\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:28:22.756939605Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"742f6dc2-47a0-41cc-b0a9-13e66d83f057\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:2379\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"etcd\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"_
_meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:56.640481859Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6891d70c-a3ec-4d90-b283-d4abf49382d3\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:2379\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_nod
e_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"etcd\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:36.245067150Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"49572518-4248-4dc2-8392-e8298ad9706c\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196
.0.105:9980\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"e93738df-a38e-4121-9c4e-ab9deca1d4be\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:28:22.756939605Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"etcd-readyz\",\"__meta_kubernetes_pod_container_port_name\":\"readyz\",\"__meta_kubernetes_pod_container_port_number\":\"9980\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"742f6dc2-47a0-41cc-b0a9-13e66d83f057\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring
/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9980\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"aa353535-1010-4ffa-99b6-da582e780536\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:26:56.640481859Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"etcd-readyz\",\"__meta_kubernetes_pod_container_port_name\":\"readyz\",\"__meta_kubernetes_pod_container_port_number\":\"9980\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6891d70c-a3ec-4d90-b283-d4abf49382d3\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme_
_\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9980\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"etcd\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"etcd\",\"__meta_kubernetes_namespace\":\"openshift-etcd\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_hash\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_mirror\":\"515275cf-9496-4dc0-b86e-2712e99c18e7\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_seen\":\"2022-10-11T16:29:36.245067150Z\",\"__meta_kubernetes_pod_annotation_kubernetes_io_config_source\":\"file\",\"__meta_kubernetes_pod_annotation_target_workload_openshift_io_management\":\"{\\\"effect\\\": \\\"PreferredDuringScheduling\\\"}\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_mirror\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_seen\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_source\":\"true\",\"__meta_kubernetes_pod_annotationpresent_target_workload_openshift_io_management\":\"true\",\"__meta_kubernetes_pod_container_name\":\"etcd-readyz\",\"__meta_kubernetes_pod_container_port_name\":\"readyz\",\"__meta_kubernetes_pod_container_port_number\":\"9980\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"Node\",\"__meta_kubernetes_pod_controller_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app\":\"etcd\",\"__meta_kubernetes_pod_label_etcd\":\"true\",\"__meta_kubernetes_pod_label_k8s_app\":\"etcd\",\"__meta_kubernetes_pod_label_revision\":\"6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_etcd\":\"true\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"etcd-ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"49572518-4248-4dc2-8392-e8298ad9706c\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"etcd\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_na
me\":\"etcd\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/etcd/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\"
,\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint
_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret
_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\"
:\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\"
,\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostes
t-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes
_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoi
nts_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_a
nnotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endp
oints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshi
ft_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanag
er\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e
38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubern
etes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertm
anager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e
74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta
_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertm
anager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e
74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta
_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernet
es_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid
\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\
":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\
":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name
\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_an
notationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kuberne
tes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_
kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"_
_meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresen
t_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving
_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation
\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni
_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version
\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":
{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernet
es_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n
5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_a
nnotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_n
ode_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"t
rue\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\
"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"tru
e\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_
pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"tru
e\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true
\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_po
d_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\"
,\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/grafana/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelp
resent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\"
,\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": 
true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__
meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"disco
veredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discover
edLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__me
ta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_
service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac
-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"
true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-9
2ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_a
pp\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-9
[... Prometheus /api/v1/targets dump, condensed for readability. The raw output is one escaped-JSON run of discoveredLabels-only entries, all for the scrape job serviceMonitor/openshift-monitoring/kube-state-metrics/0 in namespace openshift-monitoring, all with __scheme__ https and __metrics_path__ /metrics. Each entry repeats the full set of __meta_kubernetes_* endpoint, pod, and service labels (including the kuryr network-status annotations); the fields that actually distinguish the entries are tabulated below.]

__address__ | pod | container | container port | node |
---|---|---|---|---|
10.128.23.161:9094 | alertmanager-main-0 | alertmanager | mesh-tcp | ostest-n5rnf-worker-0-94fxs |
10.128.22.112:9092 | alertmanager-main-1 | kube-rbac-proxy | tenancy | ostest-n5rnf-worker-0-94fxs |
10.128.23.138:9092 | alertmanager-main-2 | kube-rbac-proxy | tenancy | ostest-n5rnf-worker-0-94fxs |
10.128.23.161:9092 | alertmanager-main-0 | kube-rbac-proxy | tenancy | ostest-n5rnf-worker-0-94fxs |
10.128.22.230:3000 | grafana-7c5c5fb5b6-cht4p | grafana-proxy | https | ostest-n5rnf-worker-0-94fxs |
10.128.22.230:3001 | grafana-7c5c5fb5b6-cht4p | grafana | http | ostest-n5rnf-worker-0-94fxs |
10.128.22.239:8443 | telemeter-client-6d8969b4bf-dffrt | kube-rbac-proxy | https | ostest-n5rnf-worker-0-94fxs |
10.128.22.239:8080 | telemeter-client-6d8969b4bf-dffrt | telemeter-client | http | ostest-n5rnf-worker-0-94fxs |
10.128.22.45:9443 | kube-state-metrics-754df74859-w8k5h | kube-rbac-proxy-self | https-self | ostest-n5rnf-worker-0-94fxs |
10.128.23.18:9091 | prometheus-k8s-0 | prometheus-proxy | web | ostest-n5rnf-worker-0-j4pkp |
10.128.23.35:9091 | prometheus-k8s-1 | prometheus-proxy | web | ostest-n5rnf-worker-0-8kq82 |
10.128.23.18:10901 | prometheus-k8s-0 | thanos-sidecar | grpc | ostest-n5rnf-worker-0-j4pkp |
10.128.23.35:10901 | prometheus-k8s-1 | thanos-sidecar | grpc | ostest-n5rnf-worker-0-8kq82 |
10.128.23.18:10902 | prometheus-k8s-0 | thanos-sidecar | http | ostest-n5rnf-worker-0-j4pkp |
10.128.23.18:9092 | prometheus-k8s-0 | kube-rbac-proxy | tenancy | ostest-n5rnf-worker-0-j4pkp |

[... dump continues in the raw log; the final entry above (10.128.23.18:9092) is cut off mid-label in the source ...]
\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_ku
bernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Runnin
g\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubern
etes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master
-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kuber
netes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":
\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter
\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_i
o_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"
discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_
cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revi
sion_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n 
],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kuberne
tes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signe
r@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-qu
erier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_nam
e\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta
_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubern
etes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernete
s_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\"
:\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernete
s_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod
_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes
_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernete
[Prometheus /api/v1/targets discovery payload for job serviceMonitor/openshift-monitoring/kube-state-metrics/1 — discoveredLabels records only, i.e. endpoints that Kubernetes service discovery found in the openshift-monitoring namespace but that were not kept for this job.]

node-exporter DaemonSet endpoints (container kube-rbac-proxy, port 9100/https, service node-exporter, serving-cert secret node-exporter-tls, signer openshift-service-serving-signer@1665504848):
  - node-exporter-fvjvs on ostest-n5rnf-worker-0-94fxs (10.196.2.169)
  - node-exporter-7n85z on ostest-n5rnf-worker-0-8kq82 (10.196.2.72)
  - node-exporter-dlzvz on ostest-n5rnf-master-1 (10.196.3.178)
  - node-exporter-g96tz on ostest-n5rnf-master-2 (10.196.3.187)
All four records carry the standard __meta_kubernetes_* endpoint, pod, and service labels (app.kubernetes.io/name=node-exporter, app.kubernetes.io/part-of=openshift-monitoring, app.kubernetes.io/version=1.1.2, pods Running and ready), plus __metrics_path__=/metrics and __scheme__=https.
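The dump above is the raw discovery view: each record is the discoveredLabels set that Prometheus's Kubernetes service discovery attached to an endpoint before relabeling ran. A minimal sketch of pulling the same view from a live cluster, assuming a reachable prometheus-k8s route and a bearer token — PROM_URL and TOKEN below are placeholders, not values from this run:

# Minimal sketch (not from the test run): list targets that Kubernetes
# service discovery produced for a given scrape job, via the Prometheus
# targets API. PROM_URL/TOKEN are placeholder assumptions.
import requests

PROM_URL = "https://prometheus-k8s-openshift-monitoring.apps.example.com"  # placeholder route
TOKEN = "sha256~..."  # placeholder; e.g. the output of `oc whoami -t`

resp = requests.get(
    f"{PROM_URL}/api/v1/targets",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # test clusters often use a self-signed serving cert
)
resp.raise_for_status()
data = resp.json()["data"]

# droppedTargets entries carry only "discoveredLabels" (the raw
# __meta_kubernetes_* set) -- exactly the shape seen in this dump.
for target in data.get("droppedTargets", []):
    labels = target["discoveredLabels"]
    if labels.get("job") == "serviceMonitor/openshift-monitoring/kube-state-metrics/1":
        print(labels["__address__"], labels.get("__meta_kubernetes_pod_name"))

Targets that appear with only discoveredLabels — like every entry in this dump — were matched by service discovery but then discarded by the job's relabel rules.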
prometheus-k8s StatefulSet endpoints from the prometheus-k8s-thanos-sidecar service (serving-cert secret prometheus-k8s-thanos-sidecar-tls, same signer), each pod listed once per container port — 10902 (thanos-proxy, container kube-rbac-proxy-thanos), 10901 (grpc, container thanos-sidecar), 9091 (web, container prometheus-proxy), and 9092 (tenancy, container kube-rbac-proxy):
  - prometheus-k8s-0 on ostest-n5rnf-worker-0-j4pkp (pod IP 10.128.23.18, host IP 10.196.0.199, uid 57e33cf7-4412-4bfe-b728-d95159125d5b)
  - prometheus-k8s-1 on ostest-n5rnf-worker-0-8kq82 (pod IP 10.128.23.35, host IP 10.196.2.72, uid 50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e)
Both pods report app.kubernetes.io/version=2.29.2, managed-by prometheus-operator, prometheus=k8s, controller-revision-hash prometheus-k8s-77f9b66476, and kuryr CNI network-status annotations; every record again carries __metrics_path__=/metrics, __scheme__=https, and job=serviceMonitor/openshift-monitoring/kube-state-metrics/1.
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\
",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_
labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n
5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kub
ernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpre
sent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_en
dpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernete
s_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\
",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpre
sent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n 
\\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernet
es_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernet
es_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_
cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-qu
erier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_nam
e\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta
_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubern
etes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernete
s_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac
-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"
true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-9
2ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_a
pp\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-9
2ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_a
pp\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__me
ta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kuberne
tes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod
_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes
_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_
io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headl
ess\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kub
ernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":
\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_ku
bernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Runnin
g\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubern
etes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_nam
e\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_k
ubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b
-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": 
\\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"ser
viceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kube-state-metrics/1\"}},{\"discov
eredLabels\":{\"__address__\":\"10.196.0.105:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubele
t\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubern
etes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\"
,\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labe
lpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:4194\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"cadvisor\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__me
ta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kubelet\",\"__meta_kubernetes_namespace\":\"kube-system\",\"__meta_kubernetes_service_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kubelet\",\"__meta_kubernetes_service_label_k8s_app\":\"kubelet\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"kubelet\",\"__metrics_path__\":\"/metrics/cadvisor\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/kubelet/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:10255\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Node\",\"__meta_kubernetes_endpoint_address_target_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"http-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_l
[Prometheus target-discovery dump, condensed: this stretch of the failure output is a run of "discoveredLabels" entries from the Prometheus targets API, one per candidate scrape target. All kubelet entries share the same metadata (namespace kube-system, Endpoints/Service name kubelet, labels k8s-app=kubelet, app.kubernetes.io/name=kubelet, app.kubernetes.io/managed-by=prometheus-operator, endpoint protocol TCP, endpoint ready=true, address target kind Node) and differ only in the fields tabulated below.]

Kubelet node addresses discovered:

| Node | Address |
| --- | --- |
| ostest-n5rnf-master-0 | 10.196.0.105 |
| ostest-n5rnf-master-1 | 10.196.3.178 |
| ostest-n5rnf-master-2 | 10.196.3.187 |
| ostest-n5rnf-worker-0-8kq82 | 10.196.2.72 |
| ostest-n5rnf-worker-0-94fxs | 10.196.2.169 |
| ostest-n5rnf-worker-0-j4pkp | 10.196.0.199 |

Endpoint ports (each node/port pair appears once per job below):

| Port | Endpoint port name |
| --- | --- |
| 10255 | http-metrics |
| 4194 | cadvisor |

| Job | __metrics_path__ | __scheme__ |
| --- | --- | --- |
| serviceMonitor/openshift-monitoring/kubelet/1 | /metrics/cadvisor | https |
| serviceMonitor/openshift-monitoring/kubelet/2 | /metrics/probes | https |
| serviceMonitor/openshift-monitoring/kubelet/3 | /metrics | http |

The dump then continues with pod-backed targets evaluated for job serviceMonitor/openshift-monitoring/node-exporter/0 (path /metrics, scheme https), all in namespace openshift-monitoring on node ostest-n5rnf-worker-0-94fxs (host IP 10.196.2.169):

| Address | Pod (container) | Port name |
| --- | --- | --- |
| 10.128.22.89:8443 | openshift-state-metrics-c59c784c4-f5f7v (kube-rbac-proxy-main) | https-main |
| 10.128.22.89:9443 | openshift-state-metrics-c59c784c4-f5f7v (kube-rbac-proxy-self) | https-self |
| 10.128.22.112:9095 | alertmanager-main-1 (alertmanager-proxy) | web |
| 10.128.23.138:9095 | alertmanager-main-2 (alertmanager-proxy; entry continues) | web |

These pod entries additionally carry kuryr CNI network-status annotations (pod IP and MAC per pod), openshift.io/scc (restricted for openshift-state-metrics, nonroot for alertmanager), serving-cert annotations signed by openshift-service-serving-signer@1665504848, pod UIDs, and, for the alertmanager pods, app.kubernetes.io/version=0.22.2 with controller-revision-hash alertmanager-main-78c6b7cbfb.
ubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-
620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"tr
ue\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_
alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-6
20db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"tru
e\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_ku
bernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_p
od_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_
kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredL
abels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredL
abels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\"
,\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"tr
ue\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"os
test-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_ser
vice_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresen
t_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__
meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n 
\\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kube
rnetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLab
els\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__
\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_
endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@16
65504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_en
dpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"th
anos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_
name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__m
eta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_en
dpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_servi
ce_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s
-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelprese
nt_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endp
oint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_na
me\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernet
es_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\
"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kuberne
tes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\
"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_na
me\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/node-exporter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"
__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",
\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef
-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpr
esent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4
eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labe
lpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4
eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_l
abelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": 
\\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.1
28.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_n
ame\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kube
rnetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headle
ss\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_n
ame\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"d
iscoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"
discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"dis
coveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\
":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_n
ame\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_nam
e\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__
meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5
rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_s
ervice_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__m
eta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kuberne
tes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_
labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/ope
nshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_serv
ice_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kub
[Prometheus target-discovery dump, condensed. Every entry below is a discoveredLabels record for job serviceMonitor/openshift-monitoring/openshift-state-metrics/0 in namespace openshift-monitoring, scraped over https at /metrics. Each record also carried the full repeated set of __meta_kubernetes_* labels (app.kubernetes.io pod/service/endpoints labels and labelpresent flags, kuryr CNI network-status annotations with pod IP and MAC, openshift.io/scc annotation, serving-cert annotations, pod UID, host IP, controller kind); only the distinguishing fields are tabulated.]

Address | Port name | Container | Pod | Node | Service |
---|---|---|---|---|---|
10.128.23.183:9093 | tenancy-rules | kube-rbac-proxy-rules | thanos-querier-6699db6d95-42mpw | ostest-n5rnf-worker-0-j4pkp | thanos-querier |
10.128.23.114:9092 | tenancy | kube-rbac-proxy | thanos-querier-6699db6d95-cvbzq | ostest-n5rnf-worker-0-94fxs | thanos-querier |
10.128.23.183:9092 | tenancy | kube-rbac-proxy | thanos-querier-6699db6d95-42mpw | ostest-n5rnf-worker-0-j4pkp | thanos-querier |
10.128.23.114:9091 | web | oauth-proxy | thanos-querier-6699db6d95-cvbzq | ostest-n5rnf-worker-0-94fxs | thanos-querier |
10.128.23.183:9091 | web | oauth-proxy | thanos-querier-6699db6d95-42mpw | ostest-n5rnf-worker-0-j4pkp | thanos-querier |
10.128.23.114:9090 | http | thanos-query | thanos-querier-6699db6d95-cvbzq | ostest-n5rnf-worker-0-94fxs | thanos-querier |
10.128.23.183:9090 | http | thanos-query | thanos-querier-6699db6d95-42mpw | ostest-n5rnf-worker-0-j4pkp | thanos-querier |
10.128.23.18:9091 | web | prometheus-proxy | prometheus-k8s-0 | ostest-n5rnf-worker-0-j4pkp | prometheus-operated |
10.128.23.35:9091 | web | prometheus-proxy | prometheus-k8s-1 | ostest-n5rnf-worker-0-8kq82 | prometheus-operated |
10.128.23.18:10901 | grpc | thanos-sidecar | prometheus-k8s-0 | ostest-n5rnf-worker-0-j4pkp | prometheus-operated |
10.128.23.35:10901 | grpc | thanos-sidecar | prometheus-k8s-1 | ostest-n5rnf-worker-0-8kq82 | prometheus-operated |
10.128.23.18:10902 | http | thanos-sidecar | prometheus-k8s-0 | ostest-n5rnf-worker-0-j4pkp | prometheus-operated |
10.128.23.18:9092 | tenancy | kube-rbac-proxy | prometheus-k8s-0 | ostest-n5rnf-worker-0-j4pkp | prometheus-operated |
10.128.23.18:10902 | thanos-proxy | kube-rbac-proxy-thanos | prometheus-k8s-0 | ostest-n5rnf-worker-0-j4pkp | prometheus-operated |
t_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_h
eadless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__me
ta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_a
pp_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_l
abelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8
s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n 
\\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-s
idecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_n
ame\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kube
rnetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostna
me\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kub
ernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_labe
l_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-
442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_lab
elpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label
_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-4
42b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labe
lpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": 
\\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.1
28.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"
true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kube
rnetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kub
ernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_ser
ving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endp
oint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"than
os-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_nod
e_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"_
_meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_k
ubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kube
rnetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"d
iscoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"d
iscoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_ku
bernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kuber
netes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_promet
heus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_
headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__met
a_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_
io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_servi
ce_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"R
unning\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_
app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-m
aster-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_
kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_po
d_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_compone
nt\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpres
ent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshif
t-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_b
eta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_po
d_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labe
lpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\
",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": 
\\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_
annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"
discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"dis
coveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\
":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_n
ame\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/openshift-state-metrics/1\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"t
rue\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\"
:\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_ku
bernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n 
\\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labe
lpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos
-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernet
es_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\"
:\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",
\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernet
es_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_
service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheu
s-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_label
present_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes
_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_
io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headl
ess\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kub
ernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":
\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_ku
bernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Runnin
g\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubern
etes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"o
stest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\"
:\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discover
edLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_p
ath__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__m
eta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"t
rue\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_n
ame\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\"
,\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac
-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"
true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-9
2ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_a
pp\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_oper
ated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-9
2ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_a
pp\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__me
ta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kuberne
tes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_be
ta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kube
rnetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kube
rnetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"ht
tps\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kub
ernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__m
eta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node
-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_p
art_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",
\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@166
5504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discov
eredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discov
eredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-adapter/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoin
ts_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_
node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\
",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0
-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_ku
bernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labe
lpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometh
eus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_open
shift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discovere
dLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path_
_\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernet
es_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__me
ta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\
"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubern
etes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_e
ndpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert
_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node
_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querie
r-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":
\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kub
ernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_se
rvice_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\
",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"t
rue\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__m
eta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless
\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_nam
e\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift
_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_
io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_ale
rtmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea7
9e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\
"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-
ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"t
rue\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-
ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"t
rue\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_k
ubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_
pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"o
stest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_se
rvice_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_nam
e\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_en
dpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_n
ame\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\
":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kube
rnetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_i
o_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubern
etes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_n
ame\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-k8s/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operate
d_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92a
c-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":
\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-
92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent
_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_ope
rated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-
92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent
_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__m
eta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubern
etes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_
beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_k
ubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_
kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\"
:\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__met
a_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\"
,\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\"
:\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kuberne
tes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\
"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@166
5504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endp
oints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instan
ce\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kuberne
tes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\
":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint
_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\"
,\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kuberne
tes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes
_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_no
de_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__m
eta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"disco
veredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\
":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernet
es_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": 
\\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoi
nt_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_p
od_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernet
es_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus
_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_head
less\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta
_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_ku
bernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headl
ess\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_k
ubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kuber
netes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"disc
overedLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"
true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\
":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/prometheus-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true
\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"pr
ometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"tru
Dropped-target discovery dump for job `serviceMonitor/openshift-monitoring/telemeter-client/0` (namespace `openshift-monitoring`), condensed from the raw escaped `discoveredLabels` JSON. Every entry reports `__scheme__: https`, `__metrics_path__: /metrics`, pod phase `Running`, and pod ready `true`; the per-entry `__meta_kubernetes_*` label sets (kuryr network-status annotations with pod IP and MAC, SCC annotations, pod UIDs, and the repetitive `labelpresent`/`annotationpresent` flags) are near-identical across entries and are omitted below. The targets come from the endpoints objects `grafana`, `prometheus-operated`, `openshift-state-metrics`, and `alertmanager-main`.

Address | Pod | Container | Port (name) | Node
---|---|---|---|---
10.128.22.230:3001 | grafana-7c5c5fb5b6-cht4p | grafana | 3001 (http) | ostest-n5rnf-worker-0-94fxs
10.128.23.18:9091 | prometheus-k8s-0 | prometheus-proxy | 9091 (web) | ostest-n5rnf-worker-0-j4pkp
10.128.23.35:9091 | prometheus-k8s-1 | prometheus-proxy | 9091 (web) | ostest-n5rnf-worker-0-8kq82
10.128.23.18:10901 | prometheus-k8s-0 | thanos-sidecar | 10901 (grpc) | ostest-n5rnf-worker-0-j4pkp
10.128.23.35:10901 | prometheus-k8s-1 | thanos-sidecar | 10901 (grpc) | ostest-n5rnf-worker-0-8kq82
10.128.23.18:10902 | prometheus-k8s-0 | thanos-sidecar | 10902 (http) | ostest-n5rnf-worker-0-j4pkp
10.128.23.18:9092 | prometheus-k8s-0 | kube-rbac-proxy | 9092 (tenancy) | ostest-n5rnf-worker-0-j4pkp
10.128.23.18:10902 | prometheus-k8s-0 | kube-rbac-proxy-thanos | 10902 (thanos-proxy) | ostest-n5rnf-worker-0-j4pkp
10.128.23.35:10902 | prometheus-k8s-1 | thanos-sidecar | 10902 (http) | ostest-n5rnf-worker-0-8kq82
10.128.23.35:9092 | prometheus-k8s-1 | kube-rbac-proxy | 9092 (tenancy) | ostest-n5rnf-worker-0-8kq82
10.128.23.35:10902 | prometheus-k8s-1 | kube-rbac-proxy-thanos | 10902 (thanos-proxy) | ostest-n5rnf-worker-0-8kq82
10.128.22.89:8443 | openshift-state-metrics-c59c784c4-f5f7v | kube-rbac-proxy-main | 8443 (https-main) | ostest-n5rnf-worker-0-94fxs
10.128.22.89:9443 | openshift-state-metrics-c59c784c4-f5f7v | kube-rbac-proxy-self | 9443 (https-self) | ostest-n5rnf-worker-0-94fxs
10.128.22.112:9095 | alertmanager-main-1 | alertmanager-proxy | 9095 (web) | ostest-n5rnf-worker-0-94fxs
10.128.23.138:9095 | alertmanager-main-2 | alertmanager-proxy | 9095 (web) | ostest-n5rnf-worker-0-94fxs
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\
"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_
service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kub
ernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__met
a_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes
_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\
"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_no
de_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"
__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_se
rvice_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_
name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta
_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discover
edLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_ser
vice_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kub
ernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_p
od_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_conta
iner_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_a
pp_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"t
rue\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernete
s_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9
b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io
_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_
phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_servic
e_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_a
lertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-e
a79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true
\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operat
ed_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92a
c-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\"
:\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operat
ed_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92a
c-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\"
:\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta
_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernete
s_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/telemeter-client/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\
",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"t
rue\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__m
eta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_lab
elpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-
0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"thanos-proxy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": 
true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__
meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discovere
dLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_pa
rt_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_ale
rtmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8
610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true
\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-
9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"t
rue\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-
9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_ku
bernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernet
es_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_k
ubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_sec
ret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotatio
n_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-maste
r-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kube
rnetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"R
unning\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__
meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\"
:\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabel
s\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_nam
e\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\
",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": 
\\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_ta
rget_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta
_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node
_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querie
r-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_se
rvice_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_nam
e\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_en
dpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_n
ame\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\
":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kube
rnetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_i
o_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubern
etes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_n
ame\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n 
\\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernet
es_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_
openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-querier/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9093\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_e
ndpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy-rules\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-rules\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy-rules\",\"__meta_kubernetes_pod_container_port_number\":\"9093\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert
_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node
_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querie
r-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_endpoint_node_name\":
\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kub
ernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.114:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.114\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:64:00:9b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.114\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-cvbzq\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"95c88db1-e599-4351-8604-3655d9250791\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.183:9090\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes
_endpoints_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"thanos-querier\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.183\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c3:a9:de\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-query\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"9090\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"thanos-querier-6699db6d95\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.183\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6699db6d95\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"thanos-querier-6699db6d95-42mpw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6987d5e8-4a23-49ad-ab57-6240ef3c4bd7\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"thanos-querier-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_se
rvice_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"query-layer\",\"__meta_kubernetes_service_label_app_kubernetes_io_instance\":\"thanos-querier\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"thanos-query\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"thanos-querier\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.105:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpres
ent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-p5vmg\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b8ff8622-729e-4729-a7e7-8697864e6d5a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.0.199:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\
"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7cn6l\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6abaa413-0438-48a2-add5-04718c115244\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.169:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpr
esent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-fvjvs\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"958a88c3-9530-40ea-93bc-364e7b008d04\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"_
_meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.2.72:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-7n85z\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"e520f6ac-f247-4e36-a129-d0b4f724c1a3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\"
,\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.178:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kuberne
tes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-dlzvz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"053a3770-cf8f-4156-bd99-3d8ad58a3f16\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.196.3.187:9100\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"node-exporter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port
_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"9100\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"node-exporter\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7f9b7bd8b5\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"node-exporter-g96tz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"238be02b-d34b-4005-94a3-e900dadfb56b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"node-exporter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"node-exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"1.1.2\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"node-exporter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_lab
elpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.89:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kub
ernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"openshift-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.89\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:88:c2:40\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"openshift-state-metrics-c59c784c4\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.89\",\"__meta_kubernetes_pod_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"c59c784c4\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"openshift-state-metrics-c59c784c4-f5f7v\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f3277e62-2a87-4978-8163-8b1023dc4f80\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"openshift-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"openshift-state-metrics\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"openshift-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.239:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_targ
et_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\"
:\"10.128.22.239:8080\",\"__meta_kubernetes_endpoints_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_endpoints_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"telemeter-client\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.239\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:7a:87\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"telemeter-client\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"telemeter-client-6d8969b4bf\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.239\",\"__meta_kubernetes_pod_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"6d8969b4bf\",\"__meta_kubernetes_pod_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"telemeter-client-6d8969b4bf-dffrt\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4910b4f1-5eb2-45e5-9d80-09f1aed4537c\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"telemeter-client-tls\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_k8s_app\":\"telemeter-client\",\"__meta_kubernetes_service_labelpresent_k8s_app\":\"true\",\"__meta_kubernetes_service_name\":\"telemeter-client\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kuberne
tes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"
__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version
\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_ser
vice_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"
__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\
"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoints_label_alertmanager\":\"main\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_endpoints_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-main\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-udp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"alertmanager-main-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_alertmanager\":\"main\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"alert-router\",\"_
_meta_kubernetes_service_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_service_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-main\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.49:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"cluster-monitoring-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.49\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:b3:60\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"cluster-monitoring-operator-79d65bfd5b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.23.49\",\"__meta_kubernetes_pod_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"79d65bfd5b\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"cluster-monitoring-operator-79d65bfd5b-pntd6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"83ae671b-d09b-4541-b74f-673d9bbdf563\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"cluster-monitoring-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"cluster-monitoring-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"cluster-monitoring-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"
__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometh
eus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.177:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operator\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.177\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:1a:10:dc\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus-operator\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-operator-7bcc4bcc6b\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.22.177\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7bcc4bcc6b\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-operator-7bcc4bcc6b-zlbgw\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"4a35c240-ec54-45e3-b1a8-5efe98a87928\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-operator-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"controller\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-operator\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.49.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operator\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLa
bels\":{\"__address__\":\"10.128.22.112:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_pa
rt_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9095\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_ale
rtmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9095\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8
610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true
\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"udp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"UDP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"UDP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-
9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"t
rue\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-2\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9094\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_hostname\":\"alertmanager-main-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"tcp-mesh\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated
_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"alertmanager\",\"__meta_kubernetes_pod_container_port_name\":\"mesh-tcp\",\"__meta_kubernetes_pod_container_port_number\":\"9094\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-
9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.112:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.112\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ac:eb:00\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.112\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_ku
bernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"02c4ad64-a941-442b-9c8b-620db031f91a\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.138:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.138\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:01:ce\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.138\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernet
es_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-2\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5be3b096-5513-4dec-92ac-ea79e3e74e38\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.161:9092\",\"__meta_kubernetes_endpoints_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"alertmanager-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.161\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:67:65:2e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"alertmanager\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"alertmanager-main\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.161\",\"__meta_kubernetes_pod_label_alertmanager\":\"main\",\"__meta_kubernetes_pod_label_app\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"alert-router\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"main\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"alertmanager\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.22.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"alertmanager-main-78c6b7cbfb\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_labelpresent_alertmanager\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"alertmanager-main-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"0ba17a85-c575-4eef-ac90-9d8610a62ff3\",\"__meta_kubernetes_service_label_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_alertmanager\":\"true\",\"__meta_kubernetes_service_name\":\"alertmanager-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-main\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_k
ubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-main\",\"__meta_kubernetes_pod_container_port_name\":\"https-main\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_sec
ret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.45:9443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https-self\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"kube-state-metrics\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:68:1e:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"kube-state-metrics\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-self\",\"__meta_kubernetes_pod_container_port_name\":\"https-self\",\"__meta_kubernetes_pod_container_port_number\":\"9443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"kube-state-metrics-754df74859\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.45\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"754df74859\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"kube-state-metrics-754df74859-w8k5h\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"cb715a58-6c73-45b7-ad0e-f96ecd04c1e5\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"kube-state-metrics-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"exporter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"kube-state-metrics\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.0.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"kube-state-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discovered
Labels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name
\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"tenancy\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\
":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":
\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cer
t_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\
"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.77:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.77\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2f:75:3e\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.23.77\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-blrxn\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2f70ccee-4ec5-4082-bc22-22487e4f5ab9\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.82:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_endpoint_node_name\":\"o
stest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-adapter\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.82\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:aa:12:f1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_container_port_number\":\"6443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-adapter-86cfd468f7\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.82\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"86cfd468f7\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-adapter-86cfd468f7-qbb4b\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5d160ed9-a15a-44c3-b06d-a183f82d6629\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-adapter-tls\",\"__meta_kubernetes_se
rvice_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"metrics-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus-adapter\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"0.9.0\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-adapter\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_end
point_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"web\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_nam
e\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-0\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_endpoint_hostname\":\"prometheus-k8s-1\",\"__meta_kubernetes_en
dpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"grpc\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_n
ame\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\
":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kube
rnetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_i
o_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\"
:\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubern
etes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10902\",\"__meta_kubernetes_endpoints_label_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-operated\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy-thanos\",\"__meta_kubernetes_pod_container_port_name\":\"thanos-proxy\",\"__meta_kubernetes_pod_container_port_number\":\"10902\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_n
ame\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_label_operated_prometheus\":\"true\",\"__meta_kubernetes_service_labelpresent_operated_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-operated\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert
_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.18:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.18\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ff:39:16\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.23.18\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-0\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"57e33cf7-4412-4bfe-b728-d95159125d5b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_c
ert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:10901\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"thanos-sidecar\",\"__meta_kubernetes_pod_container_port_name\":\"grpc\",\"__meta_kubernetes_pod_container_port_number\":\"10901\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9091\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"prometheus-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"web\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_
secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.23.35:9092\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_endpoints_label_prometheus\":\"k8s\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"prometheus-k8s-thanos-sidecar\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.23.35\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:94:4b:ef\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_kubectl_kubernetes_io_default_container\":\"prometheus\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"nonroot\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_kubectl_kubernetes_io_default_container\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"tenancy\",\"__meta_kubernetes_pod_container_port_number\":\"9092\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"StatefulSet\",\"__meta_kubernetes_pod_controller_name\":\"prometheus-k8s\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.23.35\",\"__meta_kubernetes_pod_label_app\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_instance\":\"k8s\",\"__meta_kubernetes_pod_label_app_kubernetes_io_managed_by\":\"prometheus-operator\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"prometheus-k8s-77f9b66476\",\"__meta_kubernetes_pod_label_operator_prometheus_io_name\":\"k8s\",\"__meta_kubernetes_pod_label_operator_prometheus_io_shard\":\"0\",\"__meta_kubernetes_pod_label_prometheus\":\"k8s\",\"__meta_kubernetes_pod_label_statefulset_kubernetes_io_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_instance\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_managed_by\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_operator_prometheus_io_shard\":\"true\",\"__meta_kubernetes_pod_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_pod_labelpresent_statefulset_kubernetes_io_pod_name\":\"true\",\"__meta_kubernetes_pod_name\":\"prometheus-k8s-1\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"prometheus-k8s-thanos-sidecar-tls\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_ce
rt_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"thanos-sidecar\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"prometheus\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"2.29.2\",\"__meta_kubernetes_service_label_prometheus\":\"k8s\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_labelpresent_prometheus\":\"true\",\"__meta_kubernetes_service_name\":\"prometheus-k8s-thanos-sidecar\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3000\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"3000\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.22.230:3001\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_endpoints_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_part_of\":\"true\
",\"__meta_kubernetes_endpoints_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_endpoints_name\":\"grafana\",\"__meta_kubernetes_namespace\":\"openshift-monitoring\",\"__meta_kubernetes_pod_annotation_checksum_grafana_config\":\"bcf6fd722b2c76f194401f4b8e20d0af\",\"__meta_kubernetes_pod_annotation_checksum_grafana_datasources\":\"ae625c50302c7e8068dc3600dbd686cc\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.22.230\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d1:2a:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_config\":\"true\",\"__meta_kubernetes_pod_annotationpresent_checksum_grafana_datasources\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"grafana\",\"__meta_kubernetes_pod_container_port_name\":\"http\",\"__meta_kubernetes_pod_container_port_number\":\"3001\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"grafana-7c5c5fb5b6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.22.230\",\"__meta_kubernetes_pod_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_pod_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_pod_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c5c5fb5b6\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_name\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_pod_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"grafana-7c5c5fb5b6-cht4p\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"59162dd9-267d-4146-bca6-ddbdc3930d01\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_secret_name\":\"grafana-tls\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_label_app_kubernetes_io_component\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_name\":\"grafana\",\"__meta_kubernetes_service_label_app_kubernetes_io_part_of\":\"openshift-monitoring\",\"__meta_kubernetes_service_label_app_kubernetes_io_version\":\"7.5.11\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_component\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_name\":\"t
rue\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_part_of\":\"true\",\"__meta_kubernetes_service_labelpresent_app_kubernetes_io_version\":\"true\",\"__meta_kubernetes_service_name\":\"grafana\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-monitoring/thanos-sidecar/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.135:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-mmmtp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.135\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:0f:7c:01\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.135\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:0f:7c:01\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.135\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-mmmtp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"3e837b28-47f3-449c-a549-2f35716eadac\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annota
tion_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.247:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-rwwwz\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.247\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ad:57:02\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.247\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:ad:57:02\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.34.247\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-rwwwz\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5cc84773-7d05-45e6-9e0e-c1d785d19d6f\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.62:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-98jr8\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\
":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.62\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:4d:80:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.62\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:4d:80:fb\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.62\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-98jr8\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"b9e25138-56b7-4086-b0d8-bbfad8d59d29\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\
":\"10.128.34.92:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-xh8kk\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.92\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:94:47\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.92\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:d9:94:47\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.92\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-xh8kk\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"78e54083-207a-4a1d-9ac3-1e61e4c3a94d\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_a
lpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.35.157:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-9vnl8\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.35.157\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:80:04:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.35.157\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:80:04:9f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.35.157\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-9vnl8\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"eab7a941-acc9-4f7a-9e27-bfda6efdc8b7\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.35.46:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-metrics-daemon-6p764\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_service\":\"network-metrics-service\",\"__meta_kubernetes_endpoints_labelpresent_service\":\"true\",\"__meta_kubernetes_endpoints_labelpresent_service_kubernetes_io_headless\":\"true\",\"__meta_kubernetes_endpoint
s_name\":\"network-metrics-service\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.35.46\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:21:c6:58\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.35.46\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:21:c6:58\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.35.46\",\"__meta_kubernetes_pod_label_app\":\"network-metrics-daemon\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"7c58ffc674\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"network-metrics-daemon-6p764\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f1a5dd1f-c96d-435e-a2c2-414ef30007b0\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"metrics-daemon-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_service\":\"network-metrics-service\",\"__meta_kubernetes_service_labelpresent_service\":\"true\",\"__meta_kubernetes_service_name\":\"network-metrics-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\
"__address__\":\"10.128.34.19:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_labe
l_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_serv
ice_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_s
tatus\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-multus-admission-controller/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_
name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_
service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret
\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"kube-rbac-proxy\",\"__meta_kubernetes_pod_container_port_name\":\"https\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod
_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:6443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kube
rnetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:6443\",\"__meta_kubernetes_endp
oint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"webhook\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_s
ervice_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.19:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.19\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:c5:dc:1c\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.34.19\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-flt6k\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"5ba1f56d-f201-4e1c-aba7-538854342b42\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__m
eta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.23:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.23\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:69:02:6b\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.34.23\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-xj8rp\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"6d6558a3-fad6-4bdc-a090-1717f9129304\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service
_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.34.59:9091\",\"__meta_kubernetes_endpoints_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"multus-admission-controller\",\"__meta_kubernetes_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.34.59\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:f5:ff:1f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_container_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_container_port_name\":\"metrics-port\",\"__meta_kubernetes_pod_container_port_number\":\"9091\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"multus-admission-controller\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.34.59\",\"__meta_kubernetes_pod_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_pod_label_component\":\"network\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"6874c84874\",\"__meta_kubernetes_pod_label_namespace\":\"openshift-multus\",\"__meta_kubernetes_pod_label_openshift_io_component\":\"network\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_label_type\":\"infra\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_namespace\":\"true\",\"__meta_kubernetes_pod_labelpresent_openshift_io_component\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_labelpresent_type\":\"true\",\"__meta_kubernetes_pod_name\":\"multus-admission-controller-pprg6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8f0677ca-7cfa-475d-b538-287baeaf960b\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"multus-admission-controller-secret\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service
-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"multus-admission-controller\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"multus-admission-controller\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-multus/monitor-network/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.102.146:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-59lq9\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.102.146\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2e:58:58\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.102.146\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:2e:58:58\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.72\",\"__meta_kubernetes_pod_ip\":\"10.128.102.146\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-59lq9\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-8kq82\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"d1a98bea-e210-44c4-a570-c9b3e3b0c15b\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.102.87:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-b6qcb\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.102.87\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b8:a1:d9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.102.87\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:b8:a1:d9\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.102.87\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-b6qcb\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"a0d01f62-8fc4-461d-9bb0-508100b31c66\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.103.135:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-8pbt4\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.135\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:af:35:5d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.135\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:af:35:5d\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.2.169\",\"__meta_kubernetes_pod_ip\":\"10.128.103.135\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-8pbt4\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-94fxs\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"dfe74f2a-da84-4b8a-b5ae-85624567baca\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.103.154:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-x7ncv\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:01:ae:14\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.154\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:01:ae:14\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.103.154\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-x7ncv\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"2b7a96a9-c1a8-4940-adaa-942043648bad\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.103.215:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-k2dkh\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.215\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fc:d5:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.215\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:fc:d5:0a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.103.215\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-k2dkh\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"9be059a1-72fb-40df-a638-65738e955f58\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.103.253:8080\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"network-check-target-675xj\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"network-check-target\",\"__meta_kubernetes_namespace\":\"openshift-network-diagnostics\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.253\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:3e:12:ac\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.103.253\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:3e:12:ac\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"restricted\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"network-check-target-container\",\"__meta_kubernetes_pod_container_port_number\":\"8080\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"DaemonSet\",\"__meta_kubernetes_pod_controller_name\":\"network-check-target\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.199\",\"__meta_kubernetes_pod_ip\":\"10.128.103.253\",\"__meta_kubernetes_pod_label_app\":\"network-check-target\",\"__meta_kubernetes_pod_label_controller_revision_hash\":\"69576c5c48\",\"__meta_kubernetes_pod_label_kubernetes_io_os\":\"linux\",\"__meta_kubernetes_pod_label_pod_template_generation\":\"1\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_controller_revision_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_kubernetes_io_os\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_generation\":\"true\",\"__meta_kubernetes_pod_name\":\"network-check-target-675xj\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-worker-0-j4pkp\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"8936e92f-a7cf-4889-95a8-6c5a667d658b\",\"__meta_kubernetes_service_name\":\"network-check-target\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-network-diagnostics/network-check-source/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.118.209:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-794b9fc494-m9zm9\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.118.209\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:e2:d2\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.118.209\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:5b:e2:d2\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-794b9fc494\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.187\",\"__meta_kubernetes_pod_ip\":\"10.128.118.209\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_label_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"794b9fc494\",\"__meta_kubernetes_pod_label_revision\":\"1\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-794b9fc494-m9zm9\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-2\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"3aa58f46-924e-4cd2-9aea-09be52dd9703\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-oauth-apiserver/openshift-oauth-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.119.144:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-794b9fc494-bwqm7\",\"__meta_kubernetes_endpoint_node_nam
e\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.119.144\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:8d:f7:ff\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.119.144\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:8d:f7:ff\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-794b9fc494\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.119.144\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_label_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"794b9fc494\",\"__meta_kubernetes_pod_label_revision\":\"1\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__meta_kubernetes_pod_name\":\"apiserver-794b9fc494-bwqm7\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"eee0ef71-5e00-42cc-9f3f-5751f435891d\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_pr
ometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-oauth-apiserver/openshift-oauth-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.119.66:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"apiserver-794b9fc494-mh5mh\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"api\",\"__meta_kubernetes_namespace\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.119.66\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a5:82:c1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.119.66\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:a5:82:c1\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"node-exporter\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"wI3Hzg==\",\"__meta_kubernetes_pod_annotation_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"pfKASQ==\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_client_secret\":\"true\",\"__meta_kubernetes_pod_annotationpresent_operator_openshift_io_dep_openshift_oauth_apiserver_etcd_serving_ca_configmap\":\"true\",\"__meta_kubernetes_pod_container_name\":\"oauth-apiserver\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"apiserver-794b9fc494\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.119.66\",\"__meta_kubernetes_pod_label_apiserver\":\"true\",\"__meta_kubernetes_pod_label_app\":\"openshift-oauth-apiserver\",\"__meta_kubernetes_pod_label_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"794b9fc494\",\"__meta_kubernetes_pod_label_revision\":\"1\",\"__meta_kubernetes_pod_labelpresent_apiserver\":\"true\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_oauth_apiserver_anti_affinity\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_labelpresent_revision\":\"true\",\"__me
ta_kubernetes_pod_name\":\"apiserver-794b9fc494-mh5mh\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"ae380f69-28f7-4135-a239-268c9862de08\",\"__meta_kubernetes_service_annotation_prometheus_io_scheme\":\"https\",\"__meta_kubernetes_service_annotation_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scheme\":\"true\",\"__meta_kubernetes_service_annotationpresent_prometheus_io_scrape\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_name\":\"api\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-oauth-apiserver/openshift-oauth-apiserver/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.45:5443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"packageserver-5fb6859686-2g8hx\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"5443\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"packageserver-service\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:7a:43:e0\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:7a:43:e0\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_olm_operatorGroup\":\"olm-operators\",\"__meta_kubernetes_pod_annotation_olm_operatorNamespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olm_targetNamespaces\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olmcahash\":\"22e857e11f8fc8545f7b19e7b40f09deb38dbd5b268e26b89e90246b791afe7b\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorGroup\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorNamespace\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_targetNamespaces\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olmcahash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"packageserver\",\"__meta_kubernetes_pod_container_port_number\":\"5443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"packageserver-5fb6859686\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.93.45\",\"__meta_kubernetes_pod_label_app\":\"packageserver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5fb6859686\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"packageserver-5fb6859686-2g8hx\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f503b711-ed84-447c-ae2d-d9f748184e79\",\"__meta_kubernetes_service_name\":\"packageserver-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.91:5443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"packageserver-5fb6859686-lcrw6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"5443\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"packageserver-service\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.91\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:75:24:01\\\",\\n \\\"default\\\": true,\\n 
\\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.91\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:75:24:01\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_olm_operatorGroup\":\"olm-operators\",\"__meta_kubernetes_pod_annotation_olm_operatorNamespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olm_targetNamespaces\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olmcahash\":\"22e857e11f8fc8545f7b19e7b40f09deb38dbd5b268e26b89e90246b791afe7b\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorGroup\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorNamespace\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_targetNamespaces\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olmcahash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"packageserver\",\"__meta_kubernetes_pod_container_port_number\":\"5443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"packageserver-5fb6859686\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.93.91\",\"__meta_kubernetes_pod_label_app\":\"packageserver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5fb6859686\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"packageserver-5fb6859686-lcrw6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"d594741c-595c-4b03-861d-b7f1ea727aeb\",\"__meta_kubernetes_service_name\":\"packageserver-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.92.123:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"olm-operator-56f75d4687-pdzb6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"olm-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"olm-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": 
\\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.92.123\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:08:05:71\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.92.123\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:08:05:71\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"olm-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"olm-operator-56f75d4687\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.92.123\",\"__meta_kubernetes_pod_label_app\":\"olm-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"56f75d4687\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"olm-operator-56f75d4687-pdzb6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"90bf0bdc-6d48-4eb2-bc10-49acdc5bc676\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"olm-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_secret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"olm-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"olm-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.117:8443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoi
nt_address_target_name\":\"catalog-operator-7c7d96d8d6-bfvts\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"https-metrics\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_label_app\":\"catalog-operator\",\"__meta_kubernetes_endpoints_labelpresent_app\":\"true\",\"__meta_kubernetes_endpoints_name\":\"catalog-operator-metrics\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.117\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:8b:73\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.117\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:29:8b:73\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"catalog-operator\",\"__meta_kubernetes_pod_container_port_name\":\"metrics\",\"__meta_kubernetes_pod_container_port_number\":\"8443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"catalog-operator-7c7d96d8d6\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.93.117\",\"__meta_kubernetes_pod_label_app\":\"catalog-operator\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"7c7d96d8d6\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"catalog-operator-7c7d96d8d6-bfvts\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"245bde86-6823-4aaf-9b27-aaad0428d6f6\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_secret_name\":\"catalog-operator-serving-cert\",\"__meta_kubernetes_service_annotation_service_alpha_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotation_service_beta_openshift_io_serving_cert_signed_by\":\"openshift-service-serving-signer@1665504848\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_service_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_se
cret_name\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_alpha_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_annotationpresent_service_beta_openshift_io_serving_cert_signed_by\":\"true\",\"__meta_kubernetes_service_label_app\":\"catalog-operator\",\"__meta_kubernetes_service_labelpresent_app\":\"true\",\"__meta_kubernetes_service_name\":\"catalog-operator-metrics\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.45:5443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"packageserver-5fb6859686-2g8hx\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_endpoint_port_name\":\"5443\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"packageserver-service\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:7a:43:e0\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.45\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:7a:43:e0\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": 
{}\\n}]\",\"__meta_kubernetes_pod_annotation_olm_operatorGroup\":\"olm-operators\",\"__meta_kubernetes_pod_annotation_olm_operatorNamespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olm_targetNamespaces\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olmcahash\":\"22e857e11f8fc8545f7b19e7b40f09deb38dbd5b268e26b89e90246b791afe7b\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorGroup\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorNamespace\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_targetNamespaces\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olmcahash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"packageserver\",\"__meta_kubernetes_pod_container_port_number\":\"5443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"packageserver-5fb6859686\",\"__meta_kubernetes_pod_host_ip\":\"10.196.3.178\",\"__meta_kubernetes_pod_ip\":\"10.128.93.45\",\"__meta_kubernetes_pod_label_app\":\"packageserver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5fb6859686\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"packageserver-5fb6859686-2g8hx\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-1\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"f503b711-ed84-447c-ae2d-d9f748184e79\",\"__meta_kubernetes_service_name\":\"packageserver-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\"}},{\"discoveredLabels\":{\"__address__\":\"10.128.93.91:5443\",\"__meta_kubernetes_endpoint_address_target_kind\":\"Pod\",\"__meta_kubernetes_endpoint_address_target_name\":\"packageserver-5fb6859686-lcrw6\",\"__meta_kubernetes_endpoint_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_endpoint_port_name\":\"5443\",\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\",\"__meta_kubernetes_endpoint_ready\":\"true\",\"__meta_kubernetes_endpoints_name\":\"packageserver-service\",\"__meta_kubernetes_namespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotation_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.91\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:75:24:01\\\",\\n \\\"default\\\": true,\\n 
\\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_networks_status\":\"[{\\n \\\"name\\\": \\\"kuryr\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.128.93.91\\\"\\n ],\\n \\\"mac\\\": \\\"fa:16:3e:75:24:01\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\"__meta_kubernetes_pod_annotation_olm_operatorGroup\":\"olm-operators\",\"__meta_kubernetes_pod_annotation_olm_operatorNamespace\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olm_targetNamespaces\":\"openshift-operator-lifecycle-manager\",\"__meta_kubernetes_pod_annotation_olmcahash\":\"22e857e11f8fc8545f7b19e7b40f09deb38dbd5b268e26b89e90246b791afe7b\",\"__meta_kubernetes_pod_annotation_openshift_io_scc\":\"anyuid\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_ibm_cloud_managed\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_self_managed_high_availability\":\"true\",\"__meta_kubernetes_pod_annotationpresent_include_release_openshift_io_single_node_developer\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_networks_status\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorGroup\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_operatorNamespace\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olm_targetNamespaces\":\"true\",\"__meta_kubernetes_pod_annotationpresent_olmcahash\":\"true\",\"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\":\"true\",\"__meta_kubernetes_pod_container_name\":\"packageserver\",\"__meta_kubernetes_pod_container_port_number\":\"5443\",\"__meta_kubernetes_pod_container_port_protocol\":\"TCP\",\"__meta_kubernetes_pod_controller_kind\":\"ReplicaSet\",\"__meta_kubernetes_pod_controller_name\":\"packageserver-5fb6859686\",\"__meta_kubernetes_pod_host_ip\":\"10.196.0.105\",\"__meta_kubernetes_pod_ip\":\"10.128.93.91\",\"__meta_kubernetes_pod_label_app\":\"packageserver\",\"__meta_kubernetes_pod_label_pod_template_hash\":\"5fb6859686\",\"__meta_kubernetes_pod_labelpresent_app\":\"true\",\"__meta_kubernetes_pod_labelpresent_pod_template_hash\":\"true\",\"__meta_kubernetes_pod_name\":\"packageserver-5fb6859686-lcrw6\",\"__meta_kubernetes_pod_node_name\":\"ostest-n5rnf-master-0\",\"__meta_kubernetes_pod_phase\":\"Running\",\"__meta_kubernetes_pod_ready\":\"true\",\"__meta_kubernetes_pod_uid\":\"d594741c-595c-4b03-861d-b7f1ea727aeb\",\"__meta_kubernetes_service_name\":\"packageserver-service\",\"__metrics_path__\":\"/metrics\",\"__scheme__\":\"https\",\"job\":\"serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0\"}}]}}" STEP: verifying all expected jobs have a working target STEP: verifying standard metrics keys STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1 Oct 13 10:19:38.425: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:19:38.894: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:19:38.894: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1 Oct 13 10:19:38.894: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:19:39.294: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:19:39.298: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1 Oct 13 10:19:49.303: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:19:49.740: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:19:49.740: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1 Oct 13 10:19:49.740: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:19:50.460: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:19:50.461: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1 Oct 13 10:20:00.462: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:20:00.911: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:20:00.911: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1 Oct 13 10:20:00.911: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:20:01.315: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:20:01.316: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1 Oct 13 10:20:11.317: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:20:11.798: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:20:11.798: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1 Oct 13 10:20:11.799: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:20:12.190: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:20:12.190: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query template_router_reload_seconds_count{job="router-internal-default"} >= 1 Oct 13 10:20:22.191: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:20:22.593: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=template_router_reload_seconds_count%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:20:22.593: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query haproxy_server_up{job="router-internal-default"} >= 1 Oct 13 10:20:22.593: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-z4ls2 exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1"' Oct 13 10:20:23.044: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=haproxy_server_up%7Bjob%3D%22router-internal-default%22%7D+%3E%3D+1'\n" Oct 13 10:20:23.044: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" [AfterEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:140 STEP: Collecting events from namespace "e2e-test-prometheus-z4ls2". STEP: Found 6 events. Oct 13 10:20:33.105: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-z4ls2/execpod to ostest-n5rnf-worker-0-j4pkp Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_execpod_e2e-test-prometheus-z4ls2_06053816-a003-4d3a-a95d-f28fa95a0364_0(48fa4e40a85f79a1c4f39408ae14f4dedd374b491f0b4d0ec1c9f7d14cd6b18f): error adding pod e2e-test-prometheus-z4ls2_execpod to CNI network "multus-cni-network": [e2e-test-prometheus-z4ls2/execpod/06053816-a003-4d3a-a95d-f28fa95a0364:kuryr]: error adding container to network "kuryr": Looks like http://localhost:5036/addNetwork cannot be reached. 
Is kuryr-daemon running?; Post "http://localhost:5036/addNetwork": EOF Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:36 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.152.19/23] from kuryr Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:36 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:36 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container Oct 13 10:20:33.105: INFO: At 2022-10-13 10:19:36 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container Oct 13 10:20:33.111: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 10:20:33.111: INFO: execpod ostest-n5rnf-worker-0-j4pkp Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:43 +0000 UTC }] Oct 13 10:20:33.111: INFO: Oct 13 10:20:33.122: INFO: skipping dumping cluster info - cluster too large [AfterEach] [sig-instrumentation] Prometheus github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-prometheus-z4ls2" for this suite. fail [github.com/openshift/origin/test/extended/prometheus/prometheus.go:571]: Unexpected error: <errors.aggregate | len:2, cap:2>: [ { s: "promQL query returned unexpected results:\ntemplate_router_reload_seconds_count{job=\"router-internal-default\"} >= 1\n[]", }, { s: "promQL query returned unexpected results:\nhaproxy_server_up{job=\"router-internal-default\"} >= 1\n[]", }, ] [promQL query returned unexpected results: template_router_reload_seconds_count{job="router-internal-default"} >= 1 [], promQL query returned unexpected results: haproxy_server_up{job="router-internal-default"} >= 1 []] occurred
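
Both router queries above come back with an empty vector ("result":[]). A comparison like >= 1 filters an instant vector, so an empty result means no matching series satisfied the condition: the metric may be absent from the store entirely or below the threshold. A minimal sketch of reproducing such a query by hand from inside the cluster (the service DNS name only resolves in-cluster, which is why the test shells out through an exec pod); the prometheus-k8s service account is an assumed stand-in for any account permitted to query Thanos:

# Obtain a bearer token (newer clients; older ones used "oc sa get-token").
TOKEN=$(oc -n openshift-monitoring create token prometheus-k8s)

# -G keeps this a GET and URL-encodes the PromQL expression, matching the
# encoded form in the log above (%7B = {, %22 = ", %3E%3D = >=).
curl -G -s -k -H "Authorization: Bearer ${TOKEN}" \
  --data-urlencode 'query=template_router_reload_seconds_count{job="router-internal-default"} >= 1' \
  "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query"
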
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:17:48.959: INFO: configPath is now "/tmp/configfile3520642642" Oct 13 10:17:48.959: INFO: The user is now "e2e-test-ns-global-lwdqb-user" Oct 13 10:17:48.959: INFO: Creating project "e2e-test-ns-global-lwdqb" Oct 13 10:17:49.297: INFO: Waiting on permissions in project "e2e-test-ns-global-lwdqb" ... Oct 13 10:17:49.305: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:17:49.500: INFO: Waiting for service account "default" secrets (default-token-wftgl) to include dockercfg/token ... Oct 13 10:17:49.615: INFO: Waiting for service account "default" secrets (default-token-wftgl) to include dockercfg/token ... Oct 13 10:17:49.712: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:17:49.842: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:17:49.965: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:17:49.977: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:17:50.081: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:17:50.908: INFO: Project "e2e-test-ns-global-lwdqb" has been fully provisioned. [BeforeEach] when using OpenshiftSDN in a mode that isolates namespaces by default github.com/openshift/origin/test/extended/networking/util.go:350 Oct 13 10:17:51.263: INFO: Could not check network plugin name: exit status 1. Assuming the OpenshiftSDN plugin is not being used Oct 13 10:17:51.263: INFO: This plugin does not isolate namespaces by default. [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] when using OpenshiftSDN in a mode that isolates namespaces by default k8s.io/kubernetes@v1.22.1/test/e2e/framework/framework.go:186 [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:17:51.334: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-ns-global-lwdqb-user}, err: <nil> Oct 13 10:17:51.471: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-ns-global-lwdqb}, err: <nil> Oct 13 10:17:51.630: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~LgbhHjQEB7Cs_1IOwk15Sbfp2IoL2tYEqYg_3FmGvqg}, err: <nil> [AfterEach] [sig-network] network isolation github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-ns-global-lwdqb" for this suite. skip [github.com/openshift/origin/test/extended/networking/util.go:352]: This plugin does not isolate namespaces by default.
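
This skip is the expected outcome on the cluster under test: the namespace-isolation cases only apply when the plugin is OpenshiftSDN running in multitenant mode, and events elsewhere in this run show Kuryr handling pod networking. One way to confirm which plugin a cluster runs, assuming a working kubeconfig:

# The cluster network config records the active plugin in its status.
oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
# Expect "Kuryr" on this deployment; "OpenShiftSDN" (in multitenant mode)
# is the only setup for which these isolation tests would execute.
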
skipped
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-builds][Feature:Builds] s2i build with a root user image github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-builds][Feature:Builds] s2i build with a root user image github.com/openshift/origin/test/extended/util/client.go:116 Oct 13 10:17:46.496: INFO: configPath is now "/tmp/configfile3534416200" Oct 13 10:17:46.496: INFO: The user is now "e2e-test-s2i-build-root-vpj8g-user" Oct 13 10:17:46.496: INFO: Creating project "e2e-test-s2i-build-root-vpj8g" Oct 13 10:17:46.705: INFO: Waiting on permissions in project "e2e-test-s2i-build-root-vpj8g" ... Oct 13 10:17:46.711: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:17:46.824: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:17:46.933: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:17:47.087: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:17:47.097: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:17:47.109: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:17:47.870: INFO: Project "e2e-test-s2i-build-root-vpj8g" has been fully provisioned. [It] should create a root build and fail without a privileged SCC [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/builds/s2i_root.go:35 [AfterEach] [sig-builds][Feature:Builds] s2i build with a root user image github.com/openshift/origin/test/extended/util/client.go:140 Oct 13 10:17:48.040: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-s2i-build-root-vpj8g-user}, err: <nil> Oct 13 10:17:48.129: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-s2i-build-root-vpj8g}, err: <nil> Oct 13 10:17:48.222: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~DpNKgoRTS0zpC9G7-gcQ2o2Y9QXBbkpkuz-y0VJTvHQ}, err: <nil> [AfterEach] [sig-builds][Feature:Builds] s2i build with a root user image github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-s2i-build-root-vpj8g" for this suite. skip [github.com/openshift/origin/test/extended/builds/s2i_root.go:36]: TODO: figure out why we aren't properly denying this, also consider whether we still need to deny it
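
The skip above comes from the suite itself (the TODO in s2i_root.go), not from this cluster. As background, OpenShift admission records which SCC validated a pod in the openshift.io/scc annotation; the alertmanager pods dumped later in this log carry openshift.io/scc: nonroot. A quick, hedged way to read that annotation for any running pod (pod name and namespace are placeholders):

oc -n openshift-monitoring get pod alertmanager-main-0 \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'
# Prints the admitting SCC, e.g. "nonroot"; a build running as root would
# need something more permissive than the default "restricted" SCC.
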
fail [github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:83]: Unexpected error: <errors.aggregate | len:1, cap:1>: [ { s: "promQL query returned unexpected results:\nopenshift_build_total{phase=\"Complete\"} >= 0\n[]", }, ] promQL query returned unexpected results: openshift_build_total{phase="Complete"} >= 0 [] occurred
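
Note what this particular failure means: openshift_build_total{phase="Complete"} >= 0 matches any existing sample, since counters are never negative, so an empty vector says the series is missing from the store entirely, i.e. build metrics were never scraped, not that a value was too low. A sketch for distinguishing "metric absent" from "condition unmet", assuming a token obtained as in the earlier sketch and jq available where the query runs:

# List series for the bare metric name, independent of label or sample values.
curl -G -s -k -H "Authorization: Bearer ${TOKEN}" \
  --data-urlencode 'match[]=openshift_build_total' \
  "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/series" \
  | jq '.data | length'
# 0 confirms the metric is absent, pointing at scrape or discovery of the
# exporter side rather than at the builds themselves.
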
[BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/framework.go:1453 [BeforeEach] [Top Level] github.com/openshift/origin/test/extended/util/test.go:61 [BeforeEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus github.com/openshift/origin/test/extended/util/client.go:142 STEP: Creating a kubernetes client [BeforeEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:27 [It] should start and expose a secured proxy and verify build metrics [Skipped:Disconnected] [Suite:openshift/conformance/parallel] github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:43 Oct 13 10:17:32.453: INFO: configPath is now "/tmp/configfile2484115495" Oct 13 10:17:32.453: INFO: The user is now "e2e-test-prometheus-dcjzj-user" Oct 13 10:17:32.454: INFO: Creating project "e2e-test-prometheus-dcjzj" Oct 13 10:17:32.656: INFO: Waiting on permissions in project "e2e-test-prometheus-dcjzj" ... Oct 13 10:17:32.665: INFO: Waiting for ServiceAccount "default" to be provisioned... Oct 13 10:17:32.773: INFO: Waiting for service account "default" secrets () to include dockercfg/token ... Oct 13 10:17:33.028: INFO: Waiting for service account "default" secrets () to include dockercfg/token ... Oct 13 10:17:33.106: INFO: Waiting for service account "default" secrets (default-token-w4mf7) to include dockercfg/token ... Oct 13 10:17:33.174: INFO: Waiting for ServiceAccount "deployer" to be provisioned... Oct 13 10:17:33.284: INFO: Waiting for ServiceAccount "builder" to be provisioned... Oct 13 10:17:33.392: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned... Oct 13 10:17:33.412: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned... Oct 13 10:17:33.422: INFO: Waiting for RoleBinding "system:deployers" to be provisioned... Oct 13 10:17:34.008: INFO: Project "e2e-test-prometheus-dcjzj" has been fully provisioned. 
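
The provisioning block above is the harness waiting for a new project's service accounts and rolebindings to exist before the test body runs. A condensed sketch of the same wait, with NS as a placeholder project name:

# Create a throwaway project and block until its default service account
# has been provisioned, mirroring the waits logged above.
NS=e2e-demo
oc new-project "${NS}" >/dev/null
until oc -n "${NS}" get serviceaccount default >/dev/null 2>&1; do
  sleep 1
done
echo "project ${NS} provisioned"
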
Oct 13 10:17:34.013: INFO: Creating new exec pod STEP: verifying the oauth-proxy reports a 403 on the root URL Oct 13 10:18:26.767: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl -k -s -o /dev/null -w '%{http_code}' "https://thanos-querier.openshift-monitoring.svc:9091"' Oct 13 10:18:27.266: INFO: stderr: "+ curl -k -s -o /dev/null -w '%{http_code}' https://thanos-querier.openshift-monitoring.svc:9091\n" Oct 13 10:18:27.266: INFO: stdout: "403" STEP: verifying a service account token is able to authenticate Oct 13 10:18:27.266: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl -k -s -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' -o /dev/null -w '%{http_code}' "https://thanos-querier.openshift-monitoring.svc:9091/graph"' Oct 13 10:18:27.672: INFO: stderr: "+ curl -k -s -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' -o /dev/null -w '%{http_code}' https://thanos-querier.openshift-monitoring.svc:9091/graph\n" Oct 13 10:18:27.672: INFO: stdout: "200" STEP: calling oc create -f /tmp/fixture-testdata-dir1019657252/test/extended/testdata/builds/build-pruning/successful-build-config.yaml Oct 13 10:18:27.672: INFO: Running 'oc --kubeconfig=/tmp/configfile2484115495 create -f /tmp/fixture-testdata-dir1019657252/test/extended/testdata/builds/build-pruning/successful-build-config.yaml' W1013 10:18:27.752672 81811 shim_kubectl.go:55] Using non-groupfied API resources 
is deprecated and will be removed in a future release, update apiVersion to "build.openshift.io/v1" for your resource buildconfig.build.openshift.io/myphp created STEP: start build Oct 13 10:18:27.840: INFO: Running 'oc --kubeconfig=/tmp/configfile2484115495 start-build myphp -o=name' Oct 13 10:18:28.052: INFO: start-build output with args [myphp -o=name]: Error><nil> StdOut> build.build.openshift.io/myphp-1 StdErr> STEP: verifying build completed successfully Oct 13 10:18:28.054: INFO: Waiting for myphp-1 to complete Oct 13 10:19:24.099: INFO: Done waiting for myphp-1: util.BuildResult{BuildPath:"build.build.openshift.io/myphp-1", BuildName:"myphp-1", StartBuildStdErr:"", StartBuildStdOut:"build.build.openshift.io/myphp-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*v1.Build)(0xc001f76380), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc001af5440)} with error: <nil> STEP: verifying a service account token is able to query terminal build metrics from the Prometheus API STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0 Oct 13 10:19:24.100: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"' Oct 13 10:19:24.499: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n" Oct 13 10:19:24.499: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0 Oct 13 10:19:34.500: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"' Oct 13 10:19:34.873: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n" Oct 13 10:19:34.873: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0 Oct 13 10:19:44.874: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"' Oct 13 10:19:45.217: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n" Oct 13 10:19:45.217: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0 Oct 13 10:19:55.218: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"' Oct 13 10:19:55.604: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n" Oct 13 10:19:55.604: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" STEP: perform prometheus metric query openshift_build_total{phase="Complete"} >= 0 Oct 13 10:20:05.605: INFO: Running '/usr/bin/kubectl --server=https://api.ostest.shiftstack.com:6443 --kubeconfig=.kube/config --namespace=e2e-test-prometheus-dcjzj exec execpod -- /bin/sh -x -c curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0"' Oct 13 10:20:06.004: INFO: stderr: "+ curl --retry 15 --max-time 2 --retry-delay 1 -s -k -H 'Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5pSWJWb0JabkhCY0Q4UHJCN21ueHAyeHc0X1JxWGdSOWdFRFc1QVRybGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tMjh3cnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTNhZDViNTYtN2UzMi00N2QxLTllZDYtODNjODQxMjYwZTNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.mVezmo2zBsxWki0DCsDosK_aHWISjMeN-wWw-H5TYxohQgapEvHUEGdlGayjhz4ezpiGa0Tcehj6d-VKuSlOXQbhMWq9l5bNKIrFL8NxJV7lQCkxibY7XfILHZ4ynaYoObWXfqpNsLNeDVqcDqYSA4kjL1_hU8U77tRL0dBmN3nWh3Tu3ZRBTQOUdD5D_wUkh9SujW7YOR6D-q-TgDxKZcrlPbJYOaXTRm83WPd1mkJu3Wl7yxA4JA2zBO4-4p37Vad17u7qeDV1C-EGX_6atkTO2HerycaZiXUFNPFicFPAMCq_g9laR8G0LdWrUnry77-C4B7jyEqrMyCRvCAsIA' 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=openshift_build_total%7Bphase%3D%22Complete%22%7D+%3E%3D+0'\n" Oct 13 10:20:06.004: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}\n" [AfterEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus github.com/openshift/origin/test/extended/util/client.go:140 STEP: Collecting events from namespace "e2e-test-prometheus-dcjzj". STEP: Found 15 events. Oct 13 10:20:16.068: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod: { } Scheduled: Successfully assigned e2e-test-prometheus-dcjzj/execpod to ostest-n5rnf-worker-0-j4pkp Oct 13 10:20:16.068: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for myphp-1-build: { } Scheduled: Successfully assigned e2e-test-prometheus-dcjzj/myphp-1-build to ostest-n5rnf-worker-0-8kq82 Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:23 +0000 UTC - event for execpod: {multus } AddedInterface: Add eth0 [10.128.187.17/23] from kuryr Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" already present on machine Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Created: Created container agnhost-container Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:23 +0000 UTC - event for execpod: {kubelet ostest-n5rnf-worker-0-j4pkp} Started: Started container agnhost-container Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {multus } AddedInterface: Add eth0 [10.128.186.69/23] from kuryr Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container manage-dockerfile Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container manage-dockerfile Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:48 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Pulled: Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de0c97dc2ac29bf429903d03c728bbb8d603486893b1b1d05f8c26f7a8fb2917" already present on machine Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:49 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Created: Created container docker-build Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:49 +0000 UTC - event for myphp-1-build: {kubelet ostest-n5rnf-worker-0-8kq82} Started: Started container docker-build Oct 13 10:20:16.068: INFO: At 2022-10-13 10:18:50 +0000 UTC - event for myphp-1: {build-controller } BuildStarted: Build e2e-test-prometheus-dcjzj/myphp-1 is now running Oct 13 10:20:16.068: INFO: At 2022-10-13 10:19:19 +0000 UTC - event for myphp-1: {build-controller } BuildCompleted: Build e2e-test-prometheus-dcjzj/myphp-1 completed successfully Oct 13 10:20:16.077: INFO: POD NODE PHASE GRACE CONDITIONS Oct 13 10:20:16.077: INFO: execpod ostest-n5rnf-worker-0-j4pkp Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:17:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:17:34 +0000 UTC }] Oct 13 10:20:16.078: INFO: myphp-1-build ostest-n5rnf-worker-0-8kq82 Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:48 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:17 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:19:17 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-13 10:18:28 +0000 UTC }] Oct 13 10:20:16.078: INFO: Oct 13 10:20:16.091: INFO: skipping dumping cluster info - cluster too large Oct 13 10:20:16.129: INFO: Deleted {user.openshift.io/v1, Resource=users e2e-test-prometheus-dcjzj-user}, err: <nil> Oct 13 10:20:16.167: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-prometheus-dcjzj}, err: <nil> Oct 13 10:20:16.182: INFO: Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~DzUTs0iWE0gz2vesoS3bvZCiTtPeo6t3oFXhJFU28AQ}, err: <nil> [AfterEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus github.com/openshift/origin/test/extended/util/client.go:141 STEP: Destroying namespace "e2e-test-prometheus-dcjzj" for this suite. 
[AfterEach] [sig-instrumentation][sig-builds][Feature:Builds] Prometheus github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:35 Oct 13 10:20:16.198: INFO: Dumping pod state for namespace openshift-monitoring Oct 13 10:20:16.198: INFO: Running 'oc --kubeconfig=.kube/config get pods -n openshift-monitoring -o yaml' Oct 13 10:20:16.551: INFO: apiVersion: v1 items: - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.161" ], "mac": "fa:16:3e:67:65:2e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.161" ], "mac": "fa:16:3e:67:65:2e", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: alertmanager openshift.io/scc: nonroot creationTimestamp: "2022-10-11T16:30:08Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: alertmanager-main- labels: alertmanager: main app: alertmanager app.kubernetes.io/component: alert-router app.kubernetes.io/instance: main app.kubernetes.io/managed-by: prometheus-operator app.kubernetes.io/name: alertmanager app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 0.22.2 controller-revision-hash: alertmanager-main-78c6b7cbfb statefulset.kubernetes.io/pod-name: alertmanager-main-0 name: alertmanager-main-0 namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: StatefulSet name: alertmanager-main uid: f8b4c687-5618-400d-b669-305f7d140ea2 resourceVersion: "62295" uid: 0ba17a85-c575-4eef-ac90-9d8610a62ff3 spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/component: alert-router app.kubernetes.io/name: alertmanager app.kubernetes.io/part-of: openshift-monitoring namespaces: - openshift-monitoring topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - --config.file=/etc/alertmanager/config/alertmanager.yaml - --storage.path=/alertmanager - --data.retention=120h - --cluster.listen-address=[$(POD_IP)]:9094 - --web.listen-address=127.0.0.1:9093 - --web.external-url=https://alertmanager-main-openshift-monitoring.apps.ostest.shiftstack.com/ - --web.route-prefix=/ - --cluster.peer=alertmanager-main-0.alertmanager-operated:9094 - --cluster.peer=alertmanager-main-1.alertmanager-operated:9094 - --cluster.peer=alertmanager-main-2.alertmanager-operated:9094 - --cluster.reconnect-timeout=5m env: - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 imagePullPolicy: IfNotPresent name: alertmanager ports: - containerPort: 9094 name: mesh-tcp protocol: TCP - containerPort: 9094 name: mesh-udp protocol: UDP resources: requests: cpu: 4m memory: 40Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/alertmanager/config name: config-volume - mountPath: /etc/alertmanager/certs name: tls-assets readOnly: true - mountPath: /alertmanager name: alertmanager-main-db - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls name: secret-alertmanager-main-tls readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy name: secret-alertmanager-main-proxy readOnly: true - 
mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy readOnly: true - mountPath: /etc/pki/ca-trust/extracted/pem/ name: alertmanager-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-75ndq readOnly: true - args: - --listen-address=localhost:8080 - --reload-url=http://localhost:9093/-/reload - --watched-dir=/etc/alertmanager/config - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy - --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy command: - /bin/prometheus-config-reloader env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: SHARD value: "-1" image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imagePullPolicy: IfNotPresent name: config-reloader resources: requests: cpu: 1m memory: 10Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/alertmanager/config name: config-volume readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls name: secret-alertmanager-main-tls readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy name: secret-alertmanager-main-proxy readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-75ndq readOnly: true - args: - -provider=openshift - -https-address=:9095 - -http-address= - -email-domain=* - -upstream=http://localhost:9093 - '-openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers", "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "resourceName": "non-existant"}]' - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}, "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "name": "non-existant"}}' - -tls-cert=/etc/tls/private/tls.crt - -tls-key=/etc/tls/private/tls.key - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token - -cookie-secret-file=/etc/proxy/secrets/session_secret - -openshift-service-account=alertmanager-main - -openshift-ca=/etc/pki/tls/cert.pem - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt env: - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imagePullPolicy: IfNotPresent name: alertmanager-proxy ports: - containerPort: 9095 name: web protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-alertmanager-main-tls - mountPath: /etc/proxy/secrets name: secret-alertmanager-main-proxy - mountPath: /etc/pki/ca-trust/extracted/pem/ name: alertmanager-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-75ndq readOnly: true - args: - --secure-listen-address=0.0.0.0:9092 - 
--upstream=http://127.0.0.1:9096 - --config-file=/etc/kube-rbac-proxy/config.yaml - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --logtostderr=true - --v=10 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9092 name: tenancy protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy - mountPath: /etc/tls/private name: secret-alertmanager-main-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-75ndq readOnly: true - args: - --insecure-listen-address=127.0.0.1:9096 - --upstream=http://127.0.0.1:9093 - --label=namespace image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imagePullPolicy: IfNotPresent name: prom-label-proxy resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-75ndq readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostname: alertmanager-main-0 imagePullSecrets: - name: alertmanager-main-dockercfg-b785d nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65534 runAsNonRoot: true runAsUser: 65534 seLinuxOptions: level: s0:c21,c0 serviceAccount: alertmanager-main serviceAccountName: alertmanager-main subdomain: alertmanager-operated terminationGracePeriodSeconds: 120 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: config-volume secret: defaultMode: 420 secretName: alertmanager-main-generated - name: tls-assets secret: defaultMode: 420 secretName: alertmanager-main-tls-assets - name: secret-alertmanager-main-tls secret: defaultMode: 420 secretName: alertmanager-main-tls - name: secret-alertmanager-main-proxy secret: defaultMode: 420 secretName: alertmanager-main-proxy - name: secret-alertmanager-kube-rbac-proxy secret: defaultMode: 420 secretName: alertmanager-kube-rbac-proxy - emptyDir: {} name: alertmanager-main-db - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: alertmanager-trusted-ca-bundle-2rsonso43rc5p optional: true name: alertmanager-trusted-ca-bundle - name: kube-api-access-75ndq projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: 
kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:09Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:30Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:30Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:08Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://2c9dcfd6ff72bb1a3aac33b967479d1bf17da0911acaada66f7ee25938f4f973 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 lastState: {} name: alertmanager ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:28Z" - containerID: cri-o://c73085e1f0c21e8cbf861fa42d414ee13fac9636a43a6ae27715cae491fbacb2 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 lastState: {} name: alertmanager-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:29Z" - containerID: cri-o://bec2afaece9da480c2297ff78358bcc3fbac33847189692589310eb7e243de93 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc lastState: {} name: config-reloader ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:29Z" - containerID: cri-o://0753f97687e0d3fa23ec28e8f92d5bfbbfc205aa76d51a8212a26b525a62de9a image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:29Z" - containerID: cri-o://93ba2aa6f1ebd510c3cc6674ecc1ed6416c2e264603432727f8c15c339d9dc1f image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 lastState: {} name: prom-label-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:30Z" hostIP: 10.196.2.169 phase: Running podIP: 10.128.23.161 podIPs: - ip: 10.128.23.161 qosClass: Burstable startTime: "2022-10-11T16:30:09Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.112" ], "mac": "fa:16:3e:ac:eb:00", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.112" ], "mac": "fa:16:3e:ac:eb:00", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: 
alertmanager openshift.io/scc: nonroot creationTimestamp: "2022-10-11T16:30:09Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: alertmanager-main- labels: alertmanager: main app: alertmanager app.kubernetes.io/component: alert-router app.kubernetes.io/instance: main app.kubernetes.io/managed-by: prometheus-operator app.kubernetes.io/name: alertmanager app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 0.22.2 controller-revision-hash: alertmanager-main-78c6b7cbfb statefulset.kubernetes.io/pod-name: alertmanager-main-1 name: alertmanager-main-1 namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: StatefulSet name: alertmanager-main uid: f8b4c687-5618-400d-b669-305f7d140ea2 resourceVersion: "62270" uid: 02c4ad64-a941-442b-9c8b-620db031f91a spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/component: alert-router app.kubernetes.io/name: alertmanager app.kubernetes.io/part-of: openshift-monitoring namespaces: - openshift-monitoring topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - --config.file=/etc/alertmanager/config/alertmanager.yaml - --storage.path=/alertmanager - --data.retention=120h - --cluster.listen-address=[$(POD_IP)]:9094 - --web.listen-address=127.0.0.1:9093 - --web.external-url=https://alertmanager-main-openshift-monitoring.apps.ostest.shiftstack.com/ - --web.route-prefix=/ - --cluster.peer=alertmanager-main-0.alertmanager-operated:9094 - --cluster.peer=alertmanager-main-1.alertmanager-operated:9094 - --cluster.peer=alertmanager-main-2.alertmanager-operated:9094 - --cluster.reconnect-timeout=5m env: - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 imagePullPolicy: IfNotPresent name: alertmanager ports: - containerPort: 9094 name: mesh-tcp protocol: TCP - containerPort: 9094 name: mesh-udp protocol: UDP resources: requests: cpu: 4m memory: 40Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/alertmanager/config name: config-volume - mountPath: /etc/alertmanager/certs name: tls-assets readOnly: true - mountPath: /alertmanager name: alertmanager-main-db - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls name: secret-alertmanager-main-tls readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy name: secret-alertmanager-main-proxy readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy readOnly: true - mountPath: /etc/pki/ca-trust/extracted/pem/ name: alertmanager-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fxcmh readOnly: true - args: - --listen-address=localhost:8080 - --reload-url=http://localhost:9093/-/reload - --watched-dir=/etc/alertmanager/config - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy - --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy command: - /bin/prometheus-config-reloader env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: SHARD value: "-1" image: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imagePullPolicy: IfNotPresent name: config-reloader resources: requests: cpu: 1m memory: 10Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/alertmanager/config name: config-volume readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls name: secret-alertmanager-main-tls readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy name: secret-alertmanager-main-proxy readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fxcmh readOnly: true - args: - -provider=openshift - -https-address=:9095 - -http-address= - -email-domain=* - -upstream=http://localhost:9093 - '-openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers", "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "resourceName": "non-existant"}]' - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}, "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "name": "non-existant"}}' - -tls-cert=/etc/tls/private/tls.crt - -tls-key=/etc/tls/private/tls.key - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token - -cookie-secret-file=/etc/proxy/secrets/session_secret - -openshift-service-account=alertmanager-main - -openshift-ca=/etc/pki/tls/cert.pem - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt env: - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imagePullPolicy: IfNotPresent name: alertmanager-proxy ports: - containerPort: 9095 name: web protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-alertmanager-main-tls - mountPath: /etc/proxy/secrets name: secret-alertmanager-main-proxy - mountPath: /etc/pki/ca-trust/extracted/pem/ name: alertmanager-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fxcmh readOnly: true - args: - --secure-listen-address=0.0.0.0:9092 - --upstream=http://127.0.0.1:9096 - --config-file=/etc/kube-rbac-proxy/config.yaml - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --logtostderr=true - --v=10 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9092 name: tenancy protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID 
terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy - mountPath: /etc/tls/private name: secret-alertmanager-main-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fxcmh readOnly: true - args: - --insecure-listen-address=127.0.0.1:9096 - --upstream=http://127.0.0.1:9093 - --label=namespace image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imagePullPolicy: IfNotPresent name: prom-label-proxy resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fxcmh readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostname: alertmanager-main-1 imagePullSecrets: - name: alertmanager-main-dockercfg-b785d nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65534 runAsNonRoot: true runAsUser: 65534 seLinuxOptions: level: s0:c21,c0 serviceAccount: alertmanager-main serviceAccountName: alertmanager-main subdomain: alertmanager-operated terminationGracePeriodSeconds: 120 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: config-volume secret: defaultMode: 420 secretName: alertmanager-main-generated - name: tls-assets secret: defaultMode: 420 secretName: alertmanager-main-tls-assets - name: secret-alertmanager-main-tls secret: defaultMode: 420 secretName: alertmanager-main-tls - name: secret-alertmanager-main-proxy secret: defaultMode: 420 secretName: alertmanager-main-proxy - name: secret-alertmanager-kube-rbac-proxy secret: defaultMode: 420 secretName: alertmanager-kube-rbac-proxy - emptyDir: {} name: alertmanager-main-db - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: alertmanager-trusted-ca-bundle-2rsonso43rc5p optional: true name: alertmanager-trusted-ca-bundle - name: kube-api-access-fxcmh projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:09Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:29Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:29Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:09Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://e366e9418471733e9646d38f8002bde25fc9418fd8ee0ee88520f1762496e02b image: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 lastState: {} name: alertmanager ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:25Z" - containerID: cri-o://98a4290d0c4eb18ebe95954ae1df3f5918a709a3a86ef465e0b0e9349caf8c77 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 lastState: {} name: alertmanager-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:26Z" - containerID: cri-o://fffeabe3dfa30d557255c407401c645a2a5693cdba786b0847e21ebd959a2a02 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc lastState: {} name: config-reloader ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:26Z" - containerID: cri-o://10b9b9bcb478411359a06ddd0fec2974ee46ba41a895f2818ce1421ec9a42931 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:28Z" - containerID: cri-o://88b08f0610a8357f4e4f78ce0030241d16e4109d85994c819482d5547277838e image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 lastState: {} name: prom-label-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:28Z" hostIP: 10.196.2.169 phase: Running podIP: 10.128.22.112 podIPs: - ip: 10.128.22.112 qosClass: Burstable startTime: "2022-10-11T16:30:09Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.138" ], "mac": "fa:16:3e:d9:01:ce", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.138" ], "mac": "fa:16:3e:d9:01:ce", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: alertmanager openshift.io/scc: nonroot creationTimestamp: "2022-10-11T16:30:09Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: alertmanager-main- labels: alertmanager: main app: alertmanager app.kubernetes.io/component: alert-router app.kubernetes.io/instance: main app.kubernetes.io/managed-by: prometheus-operator app.kubernetes.io/name: alertmanager app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 0.22.2 controller-revision-hash: alertmanager-main-78c6b7cbfb statefulset.kubernetes.io/pod-name: alertmanager-main-2 name: alertmanager-main-2 namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: StatefulSet name: 
alertmanager-main uid: f8b4c687-5618-400d-b669-305f7d140ea2 resourceVersion: "62077" uid: 5be3b096-5513-4dec-92ac-ea79e3e74e38 spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/component: alert-router app.kubernetes.io/name: alertmanager app.kubernetes.io/part-of: openshift-monitoring namespaces: - openshift-monitoring topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - --config.file=/etc/alertmanager/config/alertmanager.yaml - --storage.path=/alertmanager - --data.retention=120h - --cluster.listen-address=[$(POD_IP)]:9094 - --web.listen-address=127.0.0.1:9093 - --web.external-url=https://alertmanager-main-openshift-monitoring.apps.ostest.shiftstack.com/ - --web.route-prefix=/ - --cluster.peer=alertmanager-main-0.alertmanager-operated:9094 - --cluster.peer=alertmanager-main-1.alertmanager-operated:9094 - --cluster.peer=alertmanager-main-2.alertmanager-operated:9094 - --cluster.reconnect-timeout=5m env: - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 imagePullPolicy: IfNotPresent name: alertmanager ports: - containerPort: 9094 name: mesh-tcp protocol: TCP - containerPort: 9094 name: mesh-udp protocol: UDP resources: requests: cpu: 4m memory: 40Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/alertmanager/config name: config-volume - mountPath: /etc/alertmanager/certs name: tls-assets readOnly: true - mountPath: /alertmanager name: alertmanager-main-db - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls name: secret-alertmanager-main-tls readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy name: secret-alertmanager-main-proxy readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy readOnly: true - mountPath: /etc/pki/ca-trust/extracted/pem/ name: alertmanager-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6jzhb readOnly: true - args: - --listen-address=localhost:8080 - --reload-url=http://localhost:9093/-/reload - --watched-dir=/etc/alertmanager/config - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls - --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy - --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy command: - /bin/prometheus-config-reloader env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: SHARD value: "-1" image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imagePullPolicy: IfNotPresent name: config-reloader resources: requests: cpu: 1m memory: 10Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/alertmanager/config name: config-volume readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-tls name: secret-alertmanager-main-tls readOnly: true - mountPath: /etc/alertmanager/secrets/alertmanager-main-proxy name: secret-alertmanager-main-proxy readOnly: true - mountPath: 
/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6jzhb readOnly: true - args: - -provider=openshift - -https-address=:9095 - -http-address= - -email-domain=* - -upstream=http://localhost:9093 - '-openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers", "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "resourceName": "non-existant"}]' - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}, "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "name": "non-existant"}}' - -tls-cert=/etc/tls/private/tls.crt - -tls-key=/etc/tls/private/tls.key - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token - -cookie-secret-file=/etc/proxy/secrets/session_secret - -openshift-service-account=alertmanager-main - -openshift-ca=/etc/pki/tls/cert.pem - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt env: - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imagePullPolicy: IfNotPresent name: alertmanager-proxy ports: - containerPort: 9095 name: web protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-alertmanager-main-tls - mountPath: /etc/proxy/secrets name: secret-alertmanager-main-proxy - mountPath: /etc/pki/ca-trust/extracted/pem/ name: alertmanager-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6jzhb readOnly: true - args: - --secure-listen-address=0.0.0.0:9092 - --upstream=http://127.0.0.1:9096 - --config-file=/etc/kube-rbac-proxy/config.yaml - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --logtostderr=true - --v=10 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9092 name: tenancy protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/kube-rbac-proxy name: secret-alertmanager-kube-rbac-proxy - mountPath: /etc/tls/private name: secret-alertmanager-main-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6jzhb readOnly: true - args: - --insecure-listen-address=127.0.0.1:9096 - --upstream=http://127.0.0.1:9093 - --label=namespace image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imagePullPolicy: IfNotPresent name: prom-label-proxy resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: 
drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6jzhb readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostname: alertmanager-main-2 imagePullSecrets: - name: alertmanager-main-dockercfg-b785d nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65534 runAsNonRoot: true runAsUser: 65534 seLinuxOptions: level: s0:c21,c0 serviceAccount: alertmanager-main serviceAccountName: alertmanager-main subdomain: alertmanager-operated terminationGracePeriodSeconds: 120 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: config-volume secret: defaultMode: 420 secretName: alertmanager-main-generated - name: tls-assets secret: defaultMode: 420 secretName: alertmanager-main-tls-assets - name: secret-alertmanager-main-tls secret: defaultMode: 420 secretName: alertmanager-main-tls - name: secret-alertmanager-main-proxy secret: defaultMode: 420 secretName: alertmanager-main-proxy - name: secret-alertmanager-kube-rbac-proxy secret: defaultMode: 420 secretName: alertmanager-kube-rbac-proxy - emptyDir: {} name: alertmanager-main-db - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: alertmanager-trusted-ca-bundle-2rsonso43rc5p optional: true name: alertmanager-trusted-ca-bundle - name: kube-api-access-6jzhb projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:09Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:14Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:14Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:09Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://fd6940ed75d13e58641fb3c2625a74f1444f998c57011e96a3664f1887f54afa image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 lastState: {} name: alertmanager ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:48Z" - containerID: cri-o://deef806f883f372089822366aa7ea339fe6d225a75b6371e90d53c7502a1949e image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 lastState: {} name: alertmanager-proxy ready: true restartCount: 0 
started: true state: running: startedAt: "2022-10-11T16:31:02Z" - containerID: cri-o://c64ab4656d7a8fbca79b3b3553464fcc721387667879bec2d3ad83496e133a78 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc lastState: {} name: config-reloader ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:48Z" - containerID: cri-o://e6e5a3a23d8d54102c2f5cf0d2e9da477fd2ee238ca40a7b0bd3d83244c07a6b image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:02Z" - containerID: cri-o://e4062977155fa4dfe12941f515f944c73e386a9d8b5cef335d6f033fc3f0a57f image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 lastState: {} name: prom-label-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:13Z" hostIP: 10.196.2.169 phase: Running podIP: 10.128.23.138 podIPs: - ip: 10.128.23.138 qosClass: Burstable startTime: "2022-10-11T16:30:09Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.49" ], "mac": "fa:16:3e:5b:b3:60", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.49" ], "mac": "fa:16:3e:5b:b3:60", "default": true, "dns": {} }] openshift.io/scc: restricted creationTimestamp: "2022-10-11T16:09:08Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: cluster-monitoring-operator-79d65bfd5b- labels: app: cluster-monitoring-operator pod-template-hash: 79d65bfd5b name: cluster-monitoring-operator-79d65bfd5b-pntd6 namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: cluster-monitoring-operator-79d65bfd5b uid: 6c319834-bf5f-411b-a63c-b07c34d9783d resourceVersion: "8726" uid: 83ae671b-d09b-4541-b74f-673d9bbdf563 spec: containers: - args: - --logtostderr - --secure-listen-address=:8443 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:8080/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 8443 name: https protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: 
cluster-monitoring-operator-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6544w readOnly: true - args: - -namespace=openshift-monitoring - -namespace-user-workload=openshift-user-workload-monitoring - -configmap=cluster-monitoring-config - -release-version=$(RELEASE_VERSION) - -logtostderr=true - -v=2 - -images=prometheus-operator=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62caff9b13ff229d124b2cb633699775684a348b573f6a6f07bd6f4039b7b0f5 - -images=prometheus-config-reloader=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc - -images=configmap-reloader=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef81374b8f5eeb48afccfcd316f6fe440b8628a2b7d0784c5326419771f368a1 - -images=prometheus=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf - -images=alertmanager=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e69a516ffad17a17c60ac452c505c9c147b5cdea72badb2fdf0693afc8919437 - -images=grafana=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40b0f08ccbe5fa16770c8a6bc71404d50685a52d4cef6c13c3e81d065ec3f91c - -images=oauth-proxy=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 - -images=node-exporter=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd - -images=kube-state-metrics=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f68265d31fd49cee8b9d93b26de237588b0b73a7defae45a2682ef379863b16 - -images=openshift-state-metrics=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f8a93508f2307e7a083d5507f3a76351c26b2e69452209f06885dbafa660dc5 - -images=kube-rbac-proxy=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c - -images=telemeter-client=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a3f86f1b302389d805f18271a6d00cb2e8b6e9c4a859f9f20aa6d0c4f574371 - -images=prom-label-proxy=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 - -images=k8s-prometheus-adapter=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1 - -images=thanos=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247 env: - name: RELEASE_VERSION value: 4.9.0-0.nightly-2022-10-10-022606 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a098108c7f005b4a61829b504ab09fd1af8039f293c68474d2420284fcd467d6 imagePullPolicy: IfNotPresent name: cluster-monitoring-operator resources: requests: cpu: 10m memory: 75Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/cluster-monitoring-operator/telemetry name: telemetry-config - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6544w readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: ostest-n5rnf-master-0 nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/master: "" preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 
1000420000 seLinuxOptions: level: s0:c21,c0 serviceAccount: cluster-monitoring-operator serviceAccountName: cluster-monitoring-operator terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 volumes: - configMap: defaultMode: 420 name: telemetry-config name: telemetry-config - name: cluster-monitoring-operator-tls secret: defaultMode: 420 optional: true secretName: cluster-monitoring-operator-tls - name: kube-api-access-6544w projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:12:18Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:35Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:35Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:12:17Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://d9bc38f29bf1f312876371c81edaff39007954ef588d63610656e38378b1929e image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a098108c7f005b4a61829b504ab09fd1af8039f293c68474d2420284fcd467d6 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a098108c7f005b4a61829b504ab09fd1af8039f293c68474d2420284fcd467d6 lastState: {} name: cluster-monitoring-operator ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:14:09Z" - containerID: cri-o://0e192890a816235784b71f17ee1d0b73c3e92e989e7481491719e4ee0206fd0a image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: terminated: containerID: cri-o://e1cb7a10016bca43327e253a74f4b6b2546cdf3284a847b77a0d08b71247c34a exitCode: 255 finishedAt: "2022-10-11T16:14:54Z" message: "I1011 16:14:54.418031 1 main.go:181] Valid token audiences: \nI1011 16:14:54.418189 1 main.go:305] Reading certificate files\nF1011 16:14:54.418229 1 main.go:309] Failed to initialize certificate reloader: error loading certificates: error loading certificate: open /etc/tls/private/tls.crt: no such file or directory\ngoroutine 1 [running]:\nk8s.io/klog/v2.stacks(0xc0000c4001, 0xc0004f6000, 0xc6, 0x1c8)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:996 +0xb9\nk8s.io/klog/v2.(*loggingT).output(0x229c320, 0xc000000003, 0x0, 0x0, 0xc0001e4770, 0x1c0063b, 0x7, 0x135, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:945 +0x191\nk8s.io/klog/v2.(*loggingT).printf(0x229c320, 0x3, 0x0, 0x0, 0x176d0d9, 0x2d, 0xc000499c38, 0x1, 0x1)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:733 
+0x17a\nk8s.io/klog/v2.Fatalf(...)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1463\nmain.main()\n\t/go/src/github.com/brancz/kube-rbac-proxy/main.go:309 +0x21f8\n\ngoroutine 18 [chan receive]:\nk8s.io/klog/v2.(*loggingT).flushDaemon(0x229c320)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b\ncreated by k8s.io/klog/v2.init.0\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416 +0xd8\n" reason: Error startedAt: "2022-10-11T16:14:54Z" name: kube-rbac-proxy ready: true restartCount: 4 started: true state: running: startedAt: "2022-10-11T16:15:35Z" hostIP: 10.196.0.105 phase: Running podIP: 10.128.23.49 podIPs: - ip: 10.128.23.49 qosClass: Burstable startTime: "2022-10-11T16:12:18Z" - apiVersion: v1 kind: Pod metadata: annotations: checksum/grafana-config: bcf6fd722b2c76f194401f4b8e20d0af checksum/grafana-datasources: ae625c50302c7e8068dc3600dbd686cc k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.230" ], "mac": "fa:16:3e:d1:2a:fb", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.230" ], "mac": "fa:16:3e:d1:2a:fb", "default": true, "dns": {} }] openshift.io/scc: restricted creationTimestamp: "2022-10-11T16:30:10Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: grafana-7c5c5fb5b6- labels: app.kubernetes.io/component: grafana app.kubernetes.io/name: grafana app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 7.5.11 pod-template-hash: 7c5c5fb5b6 name: grafana-7c5c5fb5b6-cht4p namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: grafana-7c5c5fb5b6 uid: 779121cb-12a9-4091-a906-7df12c28c1b7 resourceVersion: "61707" uid: 59162dd9-267d-4146-bca6-ddbdc3930d01 spec: containers: - args: - -config=/etc/grafana/grafana.ini image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40b0f08ccbe5fa16770c8a6bc71404d50685a52d4cef6c13c3e81d065ec3f91c imagePullPolicy: IfNotPresent name: grafana ports: - containerPort: 3001 name: http protocol: TCP resources: requests: cpu: 4m memory: 64Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/grafana name: grafana-storage - mountPath: /etc/grafana/provisioning/datasources name: grafana-datasources - mountPath: /etc/grafana/provisioning/dashboards name: grafana-dashboards - mountPath: /grafana-dashboard-definitions/0/cluster-total name: grafana-dashboard-cluster-total - mountPath: /grafana-dashboard-definitions/0/etcd name: grafana-dashboard-etcd - mountPath: /grafana-dashboard-definitions/0/k8s-resources-cluster name: grafana-dashboard-k8s-resources-cluster - mountPath: /grafana-dashboard-definitions/0/k8s-resources-namespace name: grafana-dashboard-k8s-resources-namespace - mountPath: /grafana-dashboard-definitions/0/k8s-resources-node name: grafana-dashboard-k8s-resources-node - mountPath: /grafana-dashboard-definitions/0/k8s-resources-pod name: grafana-dashboard-k8s-resources-pod - mountPath: /grafana-dashboard-definitions/0/k8s-resources-workload name: grafana-dashboard-k8s-resources-workload - mountPath: /grafana-dashboard-definitions/0/k8s-resources-workloads-namespace name: grafana-dashboard-k8s-resources-workloads-namespace - mountPath: 
/grafana-dashboard-definitions/0/namespace-by-pod name: grafana-dashboard-namespace-by-pod - mountPath: /grafana-dashboard-definitions/0/node-cluster-rsrc-use name: grafana-dashboard-node-cluster-rsrc-use - mountPath: /grafana-dashboard-definitions/0/node-rsrc-use name: grafana-dashboard-node-rsrc-use - mountPath: /grafana-dashboard-definitions/0/pod-total name: grafana-dashboard-pod-total - mountPath: /grafana-dashboard-definitions/0/prometheus name: grafana-dashboard-prometheus - mountPath: /etc/grafana name: grafana-config - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pxsmk readOnly: true - args: - -provider=openshift - -https-address=:3000 - -http-address= - -email-domain=* - -upstream=http://localhost:3001 - '-openshift-sar={"resource": "namespaces", "verb": "get"}' - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}' - -tls-cert=/etc/tls/private/tls.crt - -tls-key=/etc/tls/private/tls.key - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token - -cookie-secret-file=/etc/proxy/secrets/session_secret - -openshift-service-account=grafana - -openshift-ca=/etc/pki/tls/cert.pem - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt env: - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imagePullPolicy: IfNotPresent name: grafana-proxy ports: - containerPort: 3000 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /oauth/healthz port: https scheme: HTTPS periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/tls/private name: secret-grafana-tls - mountPath: /etc/proxy/secrets name: secret-grafana-proxy - mountPath: /etc/pki/ca-trust/extracted/pem/ name: grafana-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pxsmk readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: grafana-dockercfg-9vtxq nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000420000 seLinuxOptions: level: s0:c21,c0 serviceAccount: grafana serviceAccountName: grafana terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: grafana-storage - name: grafana-datasources secret: defaultMode: 420 secretName: grafana-datasources - configMap: defaultMode: 420 name: grafana-dashboards name: grafana-dashboards - configMap: defaultMode: 420 name: grafana-dashboard-cluster-total name: grafana-dashboard-cluster-total - configMap: defaultMode: 420 name: grafana-dashboard-etcd name: grafana-dashboard-etcd - configMap: defaultMode: 420 name: grafana-dashboard-k8s-resources-cluster name: grafana-dashboard-k8s-resources-cluster - configMap: defaultMode: 420 name: 
grafana-dashboard-k8s-resources-namespace name: grafana-dashboard-k8s-resources-namespace - configMap: defaultMode: 420 name: grafana-dashboard-k8s-resources-node name: grafana-dashboard-k8s-resources-node - configMap: defaultMode: 420 name: grafana-dashboard-k8s-resources-pod name: grafana-dashboard-k8s-resources-pod - configMap: defaultMode: 420 name: grafana-dashboard-k8s-resources-workload name: grafana-dashboard-k8s-resources-workload - configMap: defaultMode: 420 name: grafana-dashboard-k8s-resources-workloads-namespace name: grafana-dashboard-k8s-resources-workloads-namespace - configMap: defaultMode: 420 name: grafana-dashboard-namespace-by-pod name: grafana-dashboard-namespace-by-pod - configMap: defaultMode: 420 name: grafana-dashboard-node-cluster-rsrc-use name: grafana-dashboard-node-cluster-rsrc-use - configMap: defaultMode: 420 name: grafana-dashboard-node-rsrc-use name: grafana-dashboard-node-rsrc-use - configMap: defaultMode: 420 name: grafana-dashboard-pod-total name: grafana-dashboard-pod-total - configMap: defaultMode: 420 name: grafana-dashboard-prometheus name: grafana-dashboard-prometheus - name: grafana-config secret: defaultMode: 420 secretName: grafana-config - name: secret-grafana-tls secret: defaultMode: 420 secretName: grafana-tls - name: secret-grafana-proxy secret: defaultMode: 420 secretName: grafana-proxy - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: grafana-trusted-ca-bundle-2rsonso43rc5p optional: true name: grafana-trusted-ca-bundle - name: kube-api-access-pxsmk projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:10Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:03Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:03Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:10Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://9e715273e4cebed3a936917501575a378dfbcc8b7f76aaeb5970fde74bad2ebc image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40b0f08ccbe5fa16770c8a6bc71404d50685a52d4cef6c13c3e81d065ec3f91c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40b0f08ccbe5fa16770c8a6bc71404d50685a52d4cef6c13c3e81d065ec3f91c lastState: {} name: grafana ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:02Z" - containerID: cri-o://dad6ae57fad580b2f39380be96742bce1def9f9079e1baf2fe8c0f52ac6071af image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 lastState: {} name: grafana-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:03Z" hostIP: 10.196.2.169 phase: Running podIP: 10.128.22.230 podIPs: - ip: 10.128.22.230 qosClass: Burstable startTime: "2022-10-11T16:30:10Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": 
[ "10.128.22.45" ], "mac": "fa:16:3e:68:1e:0a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.45" ], "mac": "fa:16:3e:68:1e:0a", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: kube-state-metrics openshift.io/scc: restricted creationTimestamp: "2022-10-11T16:14:59Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: kube-state-metrics-754df74859- labels: app.kubernetes.io/component: exporter app.kubernetes.io/name: kube-state-metrics app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 2.0.0 pod-template-hash: 754df74859 name: kube-state-metrics-754df74859-w8k5h namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: kube-state-metrics-754df74859 uid: 114e9015-6830-4d58-bb6d-cc5d7c4427af resourceVersion: "61212" uid: cb715a58-6c73-45b7-ad0e-f96ecd04c1e5 spec: containers: - args: - --host=127.0.0.1 - --port=8081 - --telemetry-host=127.0.0.1 - --telemetry-port=8082 - --metric-denylist=kube_secret_labels - --metric-labels-allowlist=pods=[*],nodes=[*],namespaces=[*],persistentvolumes=[*],persistentvolumeclaims=[*] - | --metric-denylist= kube_.+_created, kube_.+_metadata_resource_version, kube_replicaset_metadata_generation, kube_replicaset_status_observed_generation, kube_pod_restart_policy, kube_pod_init_container_status_terminated, kube_pod_init_container_status_running, kube_pod_container_status_terminated, kube_pod_container_status_running, kube_pod_completion_time, kube_pod_status_scheduled image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f68265d31fd49cee8b9d93b26de237588b0b73a7defae45a2682ef379863b16 imagePullPolicy: IfNotPresent name: kube-state-metrics resources: requests: cpu: 2m memory: 80Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp name: volume-directive-shadow - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-2k9gg readOnly: true - args: - --logtostderr - --secure-listen-address=:8443 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:8081/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy-main ports: - containerPort: 8443 name: https-main protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: kube-state-metrics-tls - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-2k9gg readOnly: true - args: - --logtostderr - --secure-listen-address=:9443 - 
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:8082/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy-self ports: - containerPort: 9443 name: https-self protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: kube-state-metrics-tls - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-2k9gg readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000420000 seLinuxOptions: level: s0:c21,c0 serviceAccount: kube-state-metrics serviceAccountName: kube-state-metrics terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: volume-directive-shadow - name: kube-state-metrics-tls secret: defaultMode: 420 secretName: kube-state-metrics-tls - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: kube-api-access-2k9gg projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:52Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:38Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:38Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:52Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://2bc4e8a0a8586d3fb8d893efdc6953e8255fb2cc8696b28ff9f46a3601a39442 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy-main ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:23Z" - containerID: cri-o://2d9cf2111e56c0641bbd9fbc36903c69e746944b1ee8bbe61d29cdd47d3adef0 image: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy-self ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:36Z" - containerID: cri-o://e7e7a842a335cb2835376b93d73312c8ccb3783f186415f04953caa194604422 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f68265d31fd49cee8b9d93b26de237588b0b73a7defae45a2682ef379863b16 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f68265d31fd49cee8b9d93b26de237588b0b73a7defae45a2682ef379863b16 lastState: {} name: kube-state-metrics ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:22Z" hostIP: 10.196.2.169 phase: Running podIP: 10.128.22.45 podIPs: - ip: 10.128.22.45 qosClass: Burstable startTime: "2022-10-11T16:29:52Z" - apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: node-exporter creationTimestamp: "2022-10-11T16:29:42Z" generateName: node-exporter- labels: app.kubernetes.io/component: exporter app.kubernetes.io/name: node-exporter app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 1.1.2 controller-revision-hash: 7f9b7bd8b5 pod-template-generation: "1" name: node-exporter-7cn6l namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: node-exporter uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b resourceVersion: "60893" uid: 6abaa413-0438-48a2-add5-04718c115244 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - ostest-n5rnf-worker-0-j4pkp containers: - args: - --web.listen-address=127.0.0.1:9100 - --path.sysfs=/host/sys - --path.rootfs=/host/root - --no-collector.wifi - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/) - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$ - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$ - --collector.cpu.info - --collector.textfile.directory=/var/node_exporter/textfile - --no-collector.cpufreq image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: node-exporter resources: requests: cpu: 8m memory: 32Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /host/sys mountPropagation: HostToContainer name: sys readOnly: true - mountPath: /host/root mountPropagation: HostToContainer name: root readOnly: true - mountPath: /var/node_exporter/textfile name: node-exporter-textfile readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rn22c readOnly: true workingDir: /var/node_exporter/textfile - args: - --logtostderr - --secure-listen-address=[$(IP)]:9100 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:9100/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt env: - name: IP 
valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9100 hostPort: 9100 name: https protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: runAsGroup: 65532 runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: node-exporter-tls - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rn22c readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true hostPID: true imagePullSecrets: - name: node-exporter-dockercfg-d64pg initContainers: - command: - /bin/sh - -c - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init -perm /111 -type f -exec {} \;' env: - name: TMPDIR value: /tmp image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: init-textfile resources: requests: cpu: 1m memory: 1Mi securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/node_exporter/textfile name: node-exporter-textfile - mountPath: /var/log/wtmp name: node-exporter-wtmp readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rn22c readOnly: true workingDir: /var/node_exporter/textfile nodeName: ostest-n5rnf-worker-0-j4pkp nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: node-exporter serviceAccountName: node-exporter terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /sys type: "" name: sys - hostPath: path: / type: "" name: root - emptyDir: {} name: node-exporter-textfile - name: node-exporter-tls secret: defaultMode: 420 secretName: node-exporter-tls - hostPath: path: /var/log/wtmp type: File name: node-exporter-wtmp - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: kube-api-access-rn22c projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:52Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:23Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:23Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:42Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://7ef6ac436d272d70676ed277caef23f19c00c8417a2bc96126e6700fa76d6feb image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:23Z" - containerID: cri-o://8bce7ab90066cc6dc9fe7a5f6459772c1ba2c8c4e057583ab8e8d4f8707eb36a image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: node-exporter ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:00Z" hostIP: 10.196.0.199 initContainerStatuses: - containerID: cri-o://5404ad006e61510210f3f1ee208b588d3cdd985728da5a937026c0c3d61fa5fa image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: init-textfile ready: true restartCount: 0 state: terminated: containerID: cri-o://5404ad006e61510210f3f1ee208b588d3cdd985728da5a937026c0c3d61fa5fa exitCode: 0 finishedAt: "2022-10-11T16:29:52Z" reason: Completed startedAt: "2022-10-11T16:29:51Z" phase: Running podIP: 10.196.0.199 podIPs: - ip: 10.196.0.199 qosClass: Burstable startTime: "2022-10-11T16:29:43Z" - apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: node-exporter creationTimestamp: "2022-10-11T16:31:11Z" generateName: node-exporter- labels: app.kubernetes.io/component: exporter app.kubernetes.io/name: node-exporter app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 1.1.2 controller-revision-hash: 7f9b7bd8b5 pod-template-generation: "1" name: node-exporter-7n85z namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: node-exporter uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b resourceVersion: "62880" uid: e520f6ac-f247-4e36-a129-d0b4f724c1a3 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - ostest-n5rnf-worker-0-8kq82 containers: - args: - --web.listen-address=127.0.0.1:9100 - --path.sysfs=/host/sys - --path.rootfs=/host/root - --no-collector.wifi - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/) - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$ - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$ - --collector.cpu.info - --collector.textfile.directory=/var/node_exporter/textfile - --no-collector.cpufreq image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: node-exporter resources: requests: cpu: 8m memory: 32Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /host/sys mountPropagation: HostToContainer name: sys readOnly: true - mountPath: /host/root mountPropagation: HostToContainer name: root readOnly: true - mountPath: /var/node_exporter/textfile name: node-exporter-textfile readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-7drvz readOnly: true workingDir: /var/node_exporter/textfile - args: - --logtostderr - 
--secure-listen-address=[$(IP)]:9100 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:9100/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt env: - name: IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9100 hostPort: 9100 name: https protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: runAsGroup: 65532 runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: node-exporter-tls - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-7drvz readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true hostPID: true imagePullSecrets: - name: node-exporter-dockercfg-d64pg initContainers: - command: - /bin/sh - -c - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init -perm /111 -type f -exec {} \;' env: - name: TMPDIR value: /tmp image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: init-textfile resources: requests: cpu: 1m memory: 1Mi securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/node_exporter/textfile name: node-exporter-textfile - mountPath: /var/log/wtmp name: node-exporter-wtmp readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-7drvz readOnly: true workingDir: /var/node_exporter/textfile nodeName: ostest-n5rnf-worker-0-8kq82 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: node-exporter serviceAccountName: node-exporter terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /sys type: "" name: sys - hostPath: path: / type: "" name: root - emptyDir: {} name: node-exporter-textfile - name: node-exporter-tls secret: defaultMode: 420 secretName: node-exporter-tls - hostPath: path: /var/log/wtmp type: File name: node-exporter-wtmp - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: kube-api-access-7drvz projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:57Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:32:10Z" status: "True" type: Ready - 
lastProbeTime: null lastTransitionTime: "2022-10-11T16:32:10Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:31:12Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://68b6b4d6b4aa09b8e9ca3954cd9442da1a5d97db75730f3c1256d48aeeac1505 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:32:10Z" - containerID: cri-o://6cc945323e091d0db19e5a717fe18395e1ef45fef020dd6f6d93f8a6bdc705dd image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: node-exporter ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:58Z" hostIP: 10.196.2.72 initContainerStatuses: - containerID: cri-o://bba884c1b85e67cce00e3169715b99c67c94bfdf76c6e493e714680629b153d1 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: init-textfile ready: true restartCount: 0 state: terminated: containerID: cri-o://bba884c1b85e67cce00e3169715b99c67c94bfdf76c6e493e714680629b153d1 exitCode: 0 finishedAt: "2022-10-11T16:31:57Z" reason: Completed startedAt: "2022-10-11T16:31:57Z" phase: Running podIP: 10.196.2.72 podIPs: - ip: 10.196.2.72 qosClass: Burstable startTime: "2022-10-11T16:31:43Z" - apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: node-exporter creationTimestamp: "2022-10-11T16:14:59Z" generateName: node-exporter- labels: app.kubernetes.io/component: exporter app.kubernetes.io/name: node-exporter app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 1.1.2 controller-revision-hash: 7f9b7bd8b5 pod-template-generation: "1" name: node-exporter-dlzvz namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: node-exporter uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b resourceVersion: "7424" uid: 053a3770-cf8f-4156-bd99-3d8ad58a3f16 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - ostest-n5rnf-master-1 containers: - args: - --web.listen-address=127.0.0.1:9100 - --path.sysfs=/host/sys - --path.rootfs=/host/root - --no-collector.wifi - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/) - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$ - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$ - --collector.cpu.info - --collector.textfile.directory=/var/node_exporter/textfile - --no-collector.cpufreq image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: node-exporter resources: requests: cpu: 8m memory: 32Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError 
volumeMounts: - mountPath: /host/sys mountPropagation: HostToContainer name: sys readOnly: true - mountPath: /host/root mountPropagation: HostToContainer name: root readOnly: true - mountPath: /var/node_exporter/textfile name: node-exporter-textfile readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ldk97 readOnly: true workingDir: /var/node_exporter/textfile - args: - --logtostderr - --secure-listen-address=[$(IP)]:9100 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:9100/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt env: - name: IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9100 hostPort: 9100 name: https protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: runAsGroup: 65532 runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: node-exporter-tls - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ldk97 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true hostPID: true initContainers: - command: - /bin/sh - -c - '[[ ! 
-d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init -perm /111 -type f -exec {} \;' env: - name: TMPDIR value: /tmp image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: init-textfile resources: requests: cpu: 1m memory: 1Mi securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/node_exporter/textfile name: node-exporter-textfile - mountPath: /var/log/wtmp name: node-exporter-wtmp readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ldk97 readOnly: true workingDir: /var/node_exporter/textfile nodeName: ostest-n5rnf-master-1 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: node-exporter serviceAccountName: node-exporter terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /sys type: "" name: sys - hostPath: path: / type: "" name: root - emptyDir: {} name: node-exporter-textfile - name: node-exporter-tls secret: defaultMode: 420 secretName: node-exporter-tls - hostPath: path: /var/log/wtmp type: File name: node-exporter-wtmp - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: kube-api-access-ldk97 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:07Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:08Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:08Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:14:59Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://5373299b125193fa5b727225158ec0ab6a0250777a9c85ab33e3ea749e13dac9 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:15:07Z" - containerID: cri-o://d3d461cfa8b306c9cc0bd5cbb850d134aa35d7b1a48f3f34e5253fee6cfe9e5b image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: node-exporter ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:15:07Z" hostIP: 10.196.3.178 initContainerStatuses: - containerID: cri-o://dba0ea8292079f2252e506cfea37c6d5b090192b53ad2c9736889832e75144b5 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: 
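
The `init-textfile` one-liner above is a guard-plus-`find` idiom: if `/node_exporter/collectors/init` is absent, the `[[ ! -d … ]]` test succeeds and the `||` short-circuits; otherwise `find -perm /111 -type f -exec {} \;` runs every file with any execute bit set, letting optional collector-init scripts seed the textfile directory before the exporter starts. Isolated as a sketch (image is a placeholder):

```yaml
initContainers:
- name: init-textfile
  image: registry.example/node-exporter:latest   # placeholder image
  command:
  - /bin/sh
  - -c
  # Run every executable under collectors/init, but only if the dir exists.
  - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init -perm /111 -type f -exec {} \;'
  securityContext:
    privileged: true   # the scripts read host state such as /var/log/wtmp
    runAsUser: 0
  workingDir: /var/node_exporter/textfile
```
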
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: init-textfile ready: true restartCount: 0 state: terminated: containerID: cri-o://dba0ea8292079f2252e506cfea37c6d5b090192b53ad2c9736889832e75144b5 exitCode: 0 finishedAt: "2022-10-11T16:15:06Z" reason: Completed startedAt: "2022-10-11T16:15:06Z" phase: Running podIP: 10.196.3.178 podIPs: - ip: 10.196.3.178 qosClass: Burstable startTime: "2022-10-11T16:14:59Z" - apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: node-exporter creationTimestamp: "2022-10-11T16:29:01Z" generateName: node-exporter- labels: app.kubernetes.io/component: exporter app.kubernetes.io/name: node-exporter app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 1.1.2 controller-revision-hash: 7f9b7bd8b5 pod-template-generation: "1" name: node-exporter-fvjvs namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: node-exporter uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b resourceVersion: "59128" uid: 958a88c3-9530-40ea-93bc-364e7b008d04 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - ostest-n5rnf-worker-0-94fxs containers: - args: - --web.listen-address=127.0.0.1:9100 - --path.sysfs=/host/sys - --path.rootfs=/host/root - --no-collector.wifi - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/) - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$ - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$ - --collector.cpu.info - --collector.textfile.directory=/var/node_exporter/textfile - --no-collector.cpufreq image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: node-exporter resources: requests: cpu: 8m memory: 32Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /host/sys mountPropagation: HostToContainer name: sys readOnly: true - mountPath: /host/root mountPropagation: HostToContainer name: root readOnly: true - mountPath: /var/node_exporter/textfile name: node-exporter-textfile readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4t982 readOnly: true workingDir: /var/node_exporter/textfile - args: - --logtostderr - --secure-listen-address=[$(IP)]:9100 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:9100/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt env: - name: IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9100 hostPort: 9100 name: https protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: runAsGroup: 65532 runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: 
FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: node-exporter-tls - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4t982 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true hostPID: true imagePullSecrets: - name: node-exporter-dockercfg-d64pg initContainers: - command: - /bin/sh - -c - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init -perm /111 -type f -exec {} \;' env: - name: TMPDIR value: /tmp image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: init-textfile resources: requests: cpu: 1m memory: 1Mi securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/node_exporter/textfile name: node-exporter-textfile - mountPath: /var/log/wtmp name: node-exporter-wtmp readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4t982 readOnly: true workingDir: /var/node_exporter/textfile nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: node-exporter serviceAccountName: node-exporter terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /sys type: "" name: sys - hostPath: path: / type: "" name: root - emptyDir: {} name: node-exporter-textfile - name: node-exporter-tls secret: defaultMode: 420 secretName: node-exporter-tls - hostPath: path: /var/log/wtmp type: File name: node-exporter-wtmp - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: kube-api-access-4t982 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:10Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:26Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:26Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:02Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://f74b9cf71d559ebcde03172d54fb8a03dba5d82fdc1b9cc67b90d0c114bd3c49 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:29:26Z" - containerID: cri-o://fc83935f5205d1369f82c357893afec8b561f0101fea50dee1c92546ef6fe6f7 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: 
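
Every pod in this dump mounts a generated `kube-api-access-*` volume: a projected volume that merges a short-lived bound service account token, the cluster CA bundle, the pod's namespace (via the downward API), and OpenShift's service CA into one read-only directory. A minimal sketch with an illustrative volume name:

```yaml
volumes:
- name: kube-api-access-sketch    # real names carry a random suffix
  projected:
    defaultMode: 420              # decimal for octal 0644
    sources:
    - serviceAccountToken:
        expirationSeconds: 3607   # kubelet refreshes the token as it ages
        path: token
    - configMap:
        name: kube-root-ca.crt
        items:
        - key: ca.crt
          path: ca.crt
    - downwardAPI:
        items:
        - fieldRef:
            fieldPath: metadata.namespace
          path: namespace
    - configMap:
        name: openshift-service-ca.crt
        items:
        - key: service-ca.crt
          path: service-ca.crt
```
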
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: node-exporter ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:29:10Z" hostIP: 10.196.2.169 initContainerStatuses: - containerID: cri-o://a43e7f6354f638f721d6b91cf1d6809d487f411b25272d590874bd79b40ea251 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: init-textfile ready: true restartCount: 0 state: terminated: containerID: cri-o://a43e7f6354f638f721d6b91cf1d6809d487f411b25272d590874bd79b40ea251 exitCode: 0 finishedAt: "2022-10-11T16:29:10Z" reason: Completed startedAt: "2022-10-11T16:29:09Z" phase: Running podIP: 10.196.2.169 podIPs: - ip: 10.196.2.169 qosClass: Burstable startTime: "2022-10-11T16:29:02Z" - apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: node-exporter creationTimestamp: "2022-10-11T16:14:59Z" generateName: node-exporter- labels: app.kubernetes.io/component: exporter app.kubernetes.io/name: node-exporter app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 1.1.2 controller-revision-hash: 7f9b7bd8b5 pod-template-generation: "1" name: node-exporter-g96tz namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: node-exporter uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b resourceVersion: "7398" uid: 238be02b-d34b-4005-94a3-e900dadfb56b spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - ostest-n5rnf-master-2 containers: - args: - --web.listen-address=127.0.0.1:9100 - --path.sysfs=/host/sys - --path.rootfs=/host/root - --no-collector.wifi - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/) - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$ - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$ - --collector.cpu.info - --collector.textfile.directory=/var/node_exporter/textfile - --no-collector.cpufreq image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: node-exporter resources: requests: cpu: 8m memory: 32Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /host/sys mountPropagation: HostToContainer name: sys readOnly: true - mountPath: /host/root mountPropagation: HostToContainer name: root readOnly: true - mountPath: /var/node_exporter/textfile name: node-exporter-textfile readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-dg9wx readOnly: true workingDir: /var/node_exporter/textfile - args: - --logtostderr - --secure-listen-address=[$(IP)]:9100 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:9100/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt env: - 
name: IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9100 hostPort: 9100 name: https protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: runAsGroup: 65532 runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: node-exporter-tls - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-dg9wx readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true hostPID: true initContainers: - command: - /bin/sh - -c - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init -perm /111 -type f -exec {} \;' env: - name: TMPDIR value: /tmp image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: init-textfile resources: requests: cpu: 1m memory: 1Mi securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/node_exporter/textfile name: node-exporter-textfile - mountPath: /var/log/wtmp name: node-exporter-wtmp readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-dg9wx readOnly: true workingDir: /var/node_exporter/textfile nodeName: ostest-n5rnf-master-2 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: node-exporter serviceAccountName: node-exporter terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /sys type: "" name: sys - hostPath: path: / type: "" name: root - emptyDir: {} name: node-exporter-textfile - name: node-exporter-tls secret: defaultMode: 420 secretName: node-exporter-tls - hostPath: path: /var/log/wtmp type: File name: node-exporter-wtmp - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: kube-api-access-dg9wx projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:06Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:07Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:07Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:14:59Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://4515a68e11fbcf83c92ca4670136f5c0ed6c8070a8290f30e48612aaa652e8f3 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: 
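
Two spec details above explain why node-exporter runs on every node and sees real node data: the pod joins the host network and PID namespaces (so port 9100 is bound on the node IP itself and process collectors cover host processes), and a bare `operator: Exists` toleration with no key matches every taint, including the master `NoSchedule` taint. In isolation:

```yaml
spec:
  hostNetwork: true    # 9100/tcp is the node's own port (hostPort matches)
  hostPID: true        # host PID namespace for process-level collectors
  tolerations:
  - operator: Exists   # empty key + Exists tolerates all taints, all effects
```
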
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:15:06Z" - containerID: cri-o://2c585a82c9b96cb30ca8c16ed49abec4bc4a66d69d19369978173b2f2ea836c5 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: node-exporter ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:15:06Z" hostIP: 10.196.3.187 initContainerStatuses: - containerID: cri-o://d5cb7d9c128b19de4497b7ad6a16b1b8e4bc98326327c7d284b712e364afc31a image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: init-textfile ready: true restartCount: 0 state: terminated: containerID: cri-o://d5cb7d9c128b19de4497b7ad6a16b1b8e4bc98326327c7d284b712e364afc31a exitCode: 0 finishedAt: "2022-10-11T16:15:06Z" reason: Completed startedAt: "2022-10-11T16:15:06Z" phase: Running podIP: 10.196.3.187 podIPs: - ip: 10.196.3.187 qosClass: Burstable startTime: "2022-10-11T16:14:59Z" - apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: node-exporter creationTimestamp: "2022-10-11T16:14:59Z" generateName: node-exporter- labels: app.kubernetes.io/component: exporter app.kubernetes.io/name: node-exporter app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 1.1.2 controller-revision-hash: 7f9b7bd8b5 pod-template-generation: "1" name: node-exporter-p5vmg namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: node-exporter uid: 1c5a828f-03e7-40ed-b41f-3f430088ee4b resourceVersion: "7818" uid: b8ff8622-729e-4729-a7e7-8697864e6d5a spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - ostest-n5rnf-master-0 containers: - args: - --web.listen-address=127.0.0.1:9100 - --path.sysfs=/host/sys - --path.rootfs=/host/root - --no-collector.wifi - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/) - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$ - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$ - --collector.cpu.info - --collector.textfile.directory=/var/node_exporter/textfile - --no-collector.cpufreq image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: node-exporter resources: requests: cpu: 8m memory: 32Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /host/sys mountPropagation: HostToContainer name: sys readOnly: true - mountPath: /host/root mountPropagation: HostToContainer name: root readOnly: true - mountPath: /var/node_exporter/textfile name: node-exporter-textfile readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-l4vzn readOnly: true workingDir: /var/node_exporter/textfile - args: - --logtostderr - 
--secure-listen-address=[$(IP)]:9100 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:9100/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt env: - name: IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9100 hostPort: 9100 name: https protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: runAsGroup: 65532 runAsNonRoot: true runAsUser: 65532 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: node-exporter-tls - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-l4vzn readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true hostPID: true initContainers: - command: - /bin/sh - -c - '[[ ! -d /node_exporter/collectors/init ]] || find /node_exporter/collectors/init -perm /111 -type f -exec {} \;' env: - name: TMPDIR value: /tmp image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imagePullPolicy: IfNotPresent name: init-textfile resources: requests: cpu: 1m memory: 1Mi securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/node_exporter/textfile name: node-exporter-textfile - mountPath: /var/log/wtmp name: node-exporter-wtmp readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-l4vzn readOnly: true workingDir: /var/node_exporter/textfile nodeName: ostest-n5rnf-master-0 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: node-exporter serviceAccountName: node-exporter terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /sys type: "" name: sys - hostPath: path: / type: "" name: root - emptyDir: {} name: node-exporter-textfile - name: node-exporter-tls secret: defaultMode: 420 secretName: node-exporter-tls - hostPath: path: /var/log/wtmp type: File name: node-exporter-wtmp - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: kube-api-access-l4vzn projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:12Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:13Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:15:13Z" status: 
"True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:14:59Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://f3450a061fd7c1856256b6a277071ae96f823b86648ea227e9b385b84b9beb33 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:15:13Z" - containerID: cri-o://f2654a5ddb3243c9c4bec1f33d5aa787d0479c1c74638d985503b7fb085660f5 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: node-exporter ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:15:12Z" hostIP: 10.196.0.105 initContainerStatuses: - containerID: cri-o://fd6867f1a4b365181be0913d90bb089fdd37800bf5c8d0a19a2f69459710ae56 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae220ca64854debaffce3ea2ea6712768684afba229190eb2e8fec317b9b64dd lastState: {} name: init-textfile ready: true restartCount: 0 state: terminated: containerID: cri-o://fd6867f1a4b365181be0913d90bb089fdd37800bf5c8d0a19a2f69459710ae56 exitCode: 0 finishedAt: "2022-10-11T16:15:11Z" reason: Completed startedAt: "2022-10-11T16:15:11Z" phase: Running podIP: 10.196.0.105 podIPs: - ip: 10.196.0.105 qosClass: Burstable startTime: "2022-10-11T16:14:59Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.89" ], "mac": "fa:16:3e:88:c2:40", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.89" ], "mac": "fa:16:3e:88:c2:40", "default": true, "dns": {} }] openshift.io/scc: restricted creationTimestamp: "2022-10-11T16:14:59Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: openshift-state-metrics-c59c784c4- labels: k8s-app: openshift-state-metrics pod-template-hash: c59c784c4 name: openshift-state-metrics-c59c784c4-f5f7v namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: openshift-state-metrics-c59c784c4 uid: e98067fb-b51e-4f67-bae7-2d67107bbb6d resourceVersion: "62759" uid: f3277e62-2a87-4978-8163-8b1023dc4f80 spec: containers: - args: - --logtostderr - --secure-listen-address=:8443 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:8081/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy-main ports: - containerPort: 8443 name: https-main protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: 
capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/tls/private name: openshift-state-metrics-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6t86l readOnly: true - args: - --logtostderr - --secure-listen-address=:9443 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=http://127.0.0.1:8082/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy-self ports: - containerPort: 9443 name: https-self protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/tls/private name: openshift-state-metrics-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6t86l readOnly: true - args: - --host=127.0.0.1 - --port=8081 - --telemetry-host=127.0.0.1 - --telemetry-port=8082 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f8a93508f2307e7a083d5507f3a76351c26b2e69452209f06885dbafa660dc5 imagePullPolicy: IfNotPresent name: openshift-state-metrics resources: requests: cpu: 1m memory: 32Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-6t86l readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000420000 seLinuxOptions: level: s0:c21,c0 serviceAccount: openshift-state-metrics serviceAccountName: openshift-state-metrics terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: openshift-state-metrics-tls secret: defaultMode: 420 secretName: openshift-state-metrics-tls - name: kube-api-access-6t86l projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:52Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:32:01Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:32:01Z" status: "True" type: 
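
openshift-state-metrics above follows the standard loopback-plus-proxy layout: the exporter binds its main and telemetry endpoints to 127.0.0.1 only, and a dedicated kube-rbac-proxy sidecar per endpoint terminates TLS and performs delegated authentication and authorization against the API server before forwarding. Reduced to the essentials (images are placeholders):

```yaml
containers:
- name: openshift-state-metrics
  image: registry.example/openshift-state-metrics:latest  # placeholder
  args:
  - --host=127.0.0.1            # unreachable except through the sidecars
  - --port=8081                 # main metrics
  - --telemetry-host=127.0.0.1
  - --telemetry-port=8082       # self metrics
- name: kube-rbac-proxy-main
  image: registry.example/kube-rbac-proxy:latest          # placeholder
  args:
  - --secure-listen-address=:8443
  - --upstream=http://127.0.0.1:8081/
  - --tls-cert-file=/etc/tls/private/tls.crt
  - --tls-private-key-file=/etc/tls/private/tls.key
- name: kube-rbac-proxy-self
  image: registry.example/kube-rbac-proxy:latest          # placeholder
  args:
  - --secure-listen-address=:9443
  - --upstream=http://127.0.0.1:8082/
  - --tls-cert-file=/etc/tls/private/tls.crt
  - --tls-private-key-file=/etc/tls/private/tls.key
```
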
ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:52Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://24152310400c510959a71b9305b4b856a49b342c3cf5a553d58f5492b367432a image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy-main ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:47Z" - containerID: cri-o://9095dc1a211202ee760c13c86dda869eb8eaf5925be748d567fddf853dc01e80 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy-self ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:31:47Z" - containerID: cri-o://3f17d69b2b40ed701829b086a69ea9f6e380b6a6fd584e7fbc34d3dfb736dc0e image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f8a93508f2307e7a083d5507f3a76351c26b2e69452209f06885dbafa660dc5 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f8a93508f2307e7a083d5507f3a76351c26b2e69452209f06885dbafa660dc5 lastState: {} name: openshift-state-metrics ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:32:01Z" hostIP: 10.196.2.169 phase: Running podIP: 10.128.22.89 podIPs: - ip: 10.128.22.89 qosClass: Burstable startTime: "2022-10-11T16:29:52Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.77" ], "mac": "fa:16:3e:2f:75:3e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.77" ], "mac": "fa:16:3e:2f:75:3e", "default": true, "dns": {} }] openshift.io/scc: restricted creationTimestamp: "2022-10-12T16:07:54Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: prometheus-adapter-86cfd468f7- labels: app.kubernetes.io/component: metrics-adapter app.kubernetes.io/name: prometheus-adapter app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 0.9.0 pod-template-hash: 86cfd468f7 name: prometheus-adapter-86cfd468f7-blrxn namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: prometheus-adapter-86cfd468f7 uid: 23d342f4-13a5-46b1-94b2-e71701e2ca51 resourceVersion: "478940" uid: 2f70ccee-4ec5-4082-bc22-22487e4f5ab9 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: metrics-adapter app.kubernetes.io/name: prometheus-adapter app.kubernetes.io/part-of: openshift-monitoring namespaces: - openshift-monitoring topologyKey: kubernetes.io/hostname containers: - args: - --prometheus-auth-config=/etc/prometheus-config/prometheus-config.yaml - --config=/etc/adapter/config.yaml - --logtostderr=true - --metrics-relist-interval=1m - --prometheus-url=https://prometheus-k8s.openshift-monitoring.svc:9091 - --secure-port=6443 - 
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --client-ca-file=/etc/tls/private/client-ca-file - --requestheader-client-ca-file=/etc/tls/private/requestheader-client-ca-file - --requestheader-allowed-names=kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator - --requestheader-extra-headers-prefix=X-Remote-Extra- - --requestheader-group-headers=X-Remote-Group - --requestheader-username-headers=X-Remote-User - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1 imagePullPolicy: IfNotPresent name: prometheus-adapter ports: - containerPort: 6443 protocol: TCP resources: requests: cpu: 1m memory: 40Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /tmp name: tmpfs - mountPath: /etc/adapter name: config - mountPath: /etc/prometheus-config name: prometheus-adapter-prometheus-config - mountPath: /etc/ssl/certs name: serving-certs-ca-bundle - mountPath: /etc/tls/private name: tls readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-cvvtz readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: prometheus-adapter-dockercfg-pqjk2 nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000420000 seLinuxOptions: level: s0:c21,c0 serviceAccount: prometheus-adapter serviceAccountName: prometheus-adapter terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: tmpfs - configMap: defaultMode: 420 name: adapter-config name: config - configMap: defaultMode: 420 name: prometheus-adapter-prometheus-config name: prometheus-adapter-prometheus-config - configMap: defaultMode: 420 name: serving-certs-ca-bundle name: serving-certs-ca-bundle - name: tls secret: defaultMode: 420 secretName: prometheus-adapter-5so9dfn4gvaug - name: kube-api-access-cvvtz projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-12T16:07:55Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-12T16:07:59Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-12T16:07:59Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-12T16:07:55Z" status: "True" type: PodScheduled 
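
prometheus-adapter serves `metrics.k8s.io` as an aggregated API, so the flags above are the standard extension-apiserver authentication set: the kube-apiserver, acting as aggregator, presents a client certificate checked against the requestheader CA and an allowed-names list, and relays the already-authenticated caller identity in `X-Remote-*` headers. The relevant subset:

```yaml
args:
- --secure-port=6443
# CA for clients that authenticate with their own certs:
- --client-ca-file=/etc/tls/private/client-ca-file
# CA + identity allow-list for the aggregating kube-apiserver:
- --requestheader-client-ca-file=/etc/tls/private/requestheader-client-ca-file
- --requestheader-allowed-names=kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator
# Headers that carry the relayed user identity:
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-extra-headers-prefix=X-Remote-Extra-
```
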
containerStatuses: - containerID: cri-o://b5b3c0b7b390149fbdcad12d47890d1ed17958ba4010ceec0e0ec1fb8525387d image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1 lastState: {} name: prometheus-adapter ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-12T16:07:58Z" hostIP: 10.196.2.169 phase: Running podIP: 10.128.23.77 podIPs: - ip: 10.128.23.77 qosClass: Burstable startTime: "2022-10-12T16:07:55Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.82" ], "mac": "fa:16:3e:aa:12:f1", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.82" ], "mac": "fa:16:3e:aa:12:f1", "default": true, "dns": {} }] openshift.io/scc: restricted creationTimestamp: "2022-10-12T16:07:53Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: prometheus-adapter-86cfd468f7- labels: app.kubernetes.io/component: metrics-adapter app.kubernetes.io/name: prometheus-adapter app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 0.9.0 pod-template-hash: 86cfd468f7 name: prometheus-adapter-86cfd468f7-qbb4b namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: prometheus-adapter-86cfd468f7 uid: 23d342f4-13a5-46b1-94b2-e71701e2ca51 resourceVersion: "478902" uid: 5d160ed9-a15a-44c3-b06d-a183f82d6629 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: metrics-adapter app.kubernetes.io/name: prometheus-adapter app.kubernetes.io/part-of: openshift-monitoring namespaces: - openshift-monitoring topologyKey: kubernetes.io/hostname containers: - args: - --prometheus-auth-config=/etc/prometheus-config/prometheus-config.yaml - --config=/etc/adapter/config.yaml - --logtostderr=true - --metrics-relist-interval=1m - --prometheus-url=https://prometheus-k8s.openshift-monitoring.svc:9091 - --secure-port=6443 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --client-ca-file=/etc/tls/private/client-ca-file - --requestheader-client-ca-file=/etc/tls/private/requestheader-client-ca-file - --requestheader-allowed-names=kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator - --requestheader-extra-headers-prefix=X-Remote-Extra- - --requestheader-group-headers=X-Remote-Group - --requestheader-username-headers=X-Remote-User - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1 imagePullPolicy: IfNotPresent name: prometheus-adapter ports: - containerPort: 6443 protocol: TCP resources: requests: cpu: 1m memory: 40Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /tmp name: tmpfs - mountPath: /etc/adapter name: 
config - mountPath: /etc/prometheus-config name: prometheus-adapter-prometheus-config - mountPath: /etc/ssl/certs name: serving-certs-ca-bundle - mountPath: /etc/tls/private name: tls readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-sjd7t readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: prometheus-adapter-dockercfg-pqjk2 nodeName: ostest-n5rnf-worker-0-8kq82 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000420000 seLinuxOptions: level: s0:c21,c0 serviceAccount: prometheus-adapter serviceAccountName: prometheus-adapter terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: tmpfs - configMap: defaultMode: 420 name: adapter-config name: config - configMap: defaultMode: 420 name: prometheus-adapter-prometheus-config name: prometheus-adapter-prometheus-config - configMap: defaultMode: 420 name: serving-certs-ca-bundle name: serving-certs-ca-bundle - name: tls secret: defaultMode: 420 secretName: prometheus-adapter-5so9dfn4gvaug - name: kube-api-access-sjd7t projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-12T16:07:54Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-12T16:07:57Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-12T16:07:57Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-12T16:07:53Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://75874a5802148d8935e94787143d4a44b49b9e80a30ca396bcabf4c151a3c913 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:daa7987ac7a58985faf2b1b269e947cdaad212ec732de737d9f260c1dab050a1 lastState: {} name: prometheus-adapter ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-12T16:07:57Z" hostIP: 10.196.2.72 phase: Running podIP: 10.128.23.82 podIPs: - ip: 10.128.23.82 qosClass: Burstable startTime: "2022-10-12T16:07:54Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.18" ], "mac": "fa:16:3e:ff:39:16", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.18" ], "mac": "fa:16:3e:ff:39:16", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: prometheus openshift.io/scc: nonroot creationTimestamp: "2022-10-11T16:46:10Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: prometheus-k8s- labels: app: prometheus app.kubernetes.io/component: 
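
The two prometheus-adapter replicas (one per worker in this dump) are spread by a *required* pod anti-affinity rule on `kubernetes.io/hostname`: no node may run two pods matching the selector, so on a single-schedulable-node cluster the second replica would stay Pending. The stanza in isolation:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/component: metrics-adapter
          app.kubernetes.io/name: prometheus-adapter
          app.kubernetes.io/part-of: openshift-monitoring
      namespaces:
      - openshift-monitoring
      topologyKey: kubernetes.io/hostname   # at most one matching pod per node
```
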
prometheus app.kubernetes.io/instance: k8s app.kubernetes.io/managed-by: prometheus-operator app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 2.29.2 controller-revision-hash: prometheus-k8s-77f9b66476 operator.prometheus.io/name: k8s operator.prometheus.io/shard: "0" prometheus: k8s statefulset.kubernetes.io/pod-name: prometheus-k8s-0 name: prometheus-k8s-0 namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: StatefulSet name: prometheus-k8s uid: 0cf40d35-afcd-411c-af5e-48a33a70f1b0 resourceVersion: "68355" uid: 57e33cf7-4412-4bfe-b728-d95159125d5b spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/component: prometheus app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: openshift-monitoring prometheus: k8s namespaces: - openshift-monitoring topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - --web.console.templates=/etc/prometheus/consoles - --web.console.libraries=/etc/prometheus/console_libraries - --config.file=/etc/prometheus/config_out/prometheus.env.yaml - --storage.tsdb.path=/prometheus - --storage.tsdb.retention.time=15d - --web.enable-lifecycle - --web.external-url=https://prometheus-k8s-openshift-monitoring.apps.ostest.shiftstack.com/ - --web.route-prefix=/ - --web.listen-address=127.0.0.1:9090 - --web.config.file=/etc/prometheus/web_config/web-config.yaml image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf imagePullPolicy: IfNotPresent name: prometheus readinessProbe: exec: command: - sh - -c - if [ -x "$(command -v curl)" ]; then exec curl http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi failureThreshold: 120 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 3 resources: requests: cpu: 70m memory: 1Gi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/pki/ca-trust/extracted/pem/ name: prometheus-trusted-ca-bundle readOnly: true - mountPath: /etc/prometheus/config_out name: config-out readOnly: true - mountPath: /etc/prometheus/certs name: tls-assets readOnly: true - mountPath: /prometheus name: prometheus-k8s-db subPath: prometheus-db - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0 name: prometheus-k8s-rulefiles-0 - mountPath: /etc/prometheus/web_config/web-config.yaml name: web-config readOnly: true subPath: web-config.yaml - mountPath: /etc/prometheus/secrets/kube-etcd-client-certs name: secret-kube-etcd-client-certs readOnly: true - mountPath: /etc/prometheus/secrets/prometheus-k8s-tls name: secret-prometheus-k8s-tls readOnly: true - mountPath: /etc/prometheus/secrets/prometheus-k8s-proxy name: secret-prometheus-k8s-proxy readOnly: true - mountPath: /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls name: secret-prometheus-k8s-thanos-sidecar-tls readOnly: true - mountPath: /etc/prometheus/secrets/kube-rbac-proxy name: secret-kube-rbac-proxy readOnly: true - mountPath: /etc/prometheus/secrets/metrics-client-certs name: secret-metrics-client-certs readOnly: true - mountPath: /etc/prometheus/configmaps/serving-certs-ca-bundle name: configmap-serving-certs-ca-bundle readOnly: true - mountPath: 
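
Since the prometheus container binds only to 127.0.0.1:9090 (the oauth-proxy sidecar fronts it), its readiness probe above is an `exec` shell fallback chain probing the local `/-/ready` endpoint: try curl, else wget, else fail. As a sketch:

```yaml
readinessProbe:
  exec:
    command:
    - sh
    - -c
    # Works in images that ship either curl or wget; fails otherwise.
    - >-
      if [ -x "$(command -v curl)" ]; then exec curl http://localhost:9090/-/ready;
      elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready;
      else exit 1; fi
  failureThreshold: 120   # generous: up to 120 probes * 5s to allow WAL replay
  periodSeconds: 5
  timeoutSeconds: 3
```
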
/etc/prometheus/configmaps/kubelet-serving-ca-bundle name: configmap-kubelet-serving-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gqzck readOnly: true - args: - --listen-address=localhost:8080 - --reload-url=http://localhost:9090/-/reload - --config-file=/etc/prometheus/config/prometheus.yaml.gz - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 command: - /bin/prometheus-config-reloader env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: SHARD value: "0" image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imagePullPolicy: IfNotPresent name: config-reloader resources: requests: cpu: 1m memory: 10Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/prometheus/config name: config - mountPath: /etc/prometheus/config_out name: config-out - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0 name: prometheus-k8s-rulefiles-0 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gqzck readOnly: true - args: - sidecar - --prometheus.url=http://localhost:9090/ - --tsdb.path=/prometheus - --grpc-address=[$(POD_IP)]:10901 - --http-address=127.0.0.1:10902 - --grpc-server-tls-cert=/etc/tls/grpc/server.crt - --grpc-server-tls-key=/etc/tls/grpc/server.key - --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt env: - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247 imagePullPolicy: IfNotPresent name: thanos-sidecar ports: - containerPort: 10902 name: http protocol: TCP - containerPort: 10901 name: grpc protocol: TCP resources: requests: cpu: 1m memory: 25Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/grpc name: secret-grpc-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gqzck readOnly: true - args: - -provider=openshift - -https-address=:9091 - -http-address= - -email-domain=* - -upstream=http://localhost:9090 - -openshift-service-account=prometheus-k8s - '-openshift-sar={"resource": "namespaces", "verb": "get"}' - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}' - -tls-cert=/etc/tls/private/tls.crt - -tls-key=/etc/tls/private/tls.key - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token - -cookie-secret-file=/etc/proxy/secrets/session_secret - -openshift-ca=/etc/pki/tls/cert.pem - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt - -htpasswd-file=/etc/proxy/htpasswd/auth env: - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imagePullPolicy: IfNotPresent name: prometheus-proxy ports: - containerPort: 9091 name: web protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - 
mountPath: /etc/tls/private name: secret-prometheus-k8s-tls - mountPath: /etc/proxy/secrets name: secret-prometheus-k8s-proxy - mountPath: /etc/proxy/htpasswd name: secret-prometheus-k8s-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: prometheus-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gqzck readOnly: true - args: - --secure-listen-address=0.0.0.0:9092 - --upstream=http://127.0.0.1:9095 - --config-file=/etc/kube-rbac-proxy/config.yaml - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --logtostderr=true - --v=10 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9092 name: tenancy protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-prometheus-k8s-tls - mountPath: /etc/kube-rbac-proxy name: secret-kube-rbac-proxy - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gqzck readOnly: true - args: - --insecure-listen-address=127.0.0.1:9095 - --upstream=http://127.0.0.1:9090 - --label=namespace image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imagePullPolicy: IfNotPresent name: prom-label-proxy resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gqzck readOnly: true - args: - --secure-listen-address=[$(POD_IP)]:10902 - --upstream=http://127.0.0.1:10902 - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --allow-paths=/metrics - --logtostderr=true - --client-ca-file=/etc/tls/client/client-ca.crt env: - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy-thanos ports: - containerPort: 10902 name: thanos-proxy protocol: TCP resources: requests: cpu: 1m memory: 10Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-prometheus-k8s-thanos-sidecar-tls - mountPath: /etc/tls/client name: metrics-client-ca readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gqzck readOnly: true dnsPolicy: ClusterFirst 
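
Ports 9091 and 9092 above implement two different access paths into the same Prometheus: oauth-proxy on 9091 for full, login-gated access, and a tenancy chain on 9092 where kube-rbac-proxy authorizes the caller and prom-label-proxy rewrites every query to enforce a `namespace` label matcher, so tenants see only their own series. The 9092 chain, reduced (images are placeholders):

```yaml
containers:
- name: kube-rbac-proxy
  image: registry.example/kube-rbac-proxy:latest      # placeholder
  args:
  - --secure-listen-address=0.0.0.0:9092
  - --upstream=http://127.0.0.1:9095                  # hand off to the label proxy
  - --config-file=/etc/kube-rbac-proxy/config.yaml    # per-namespace authz rules
- name: prom-label-proxy
  image: registry.example/prom-label-proxy:latest     # placeholder
  args:
  - --insecure-listen-address=127.0.0.1:9095          # loopback only
  - --upstream=http://127.0.0.1:9090                  # the prometheus server itself
  - --label=namespace                                 # label matcher injected into queries
```
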
enableServiceLinks: true hostname: prometheus-k8s-0 imagePullSecrets: - name: prometheus-k8s-dockercfg-f5qm8 initContainers: - args: - --watch-interval=0 - --listen-address=:8080 - --config-file=/etc/prometheus/config/prometheus.yaml.gz - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 command: - /bin/prometheus-config-reloader env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: SHARD value: "0" image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imagePullPolicy: IfNotPresent name: init-config-reloader resources: requests: cpu: 100m memory: 50Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/prometheus/config name: config - mountPath: /etc/prometheus/config_out name: config-out - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0 name: prometheus-k8s-rulefiles-0 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-gqzck readOnly: true nodeName: ostest-n5rnf-worker-0-j4pkp nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65534 runAsNonRoot: true runAsUser: 65534 seLinuxOptions: level: s0:c21,c0 serviceAccount: prometheus-k8s serviceAccountName: prometheus-k8s subdomain: prometheus-operated terminationGracePeriodSeconds: 600 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: prometheus-k8s-db persistentVolumeClaim: claimName: prometheus-k8s-db-prometheus-k8s-0 - name: config secret: defaultMode: 420 secretName: prometheus-k8s - name: tls-assets secret: defaultMode: 420 secretName: prometheus-k8s-tls-assets - emptyDir: {} name: config-out - configMap: defaultMode: 420 name: prometheus-k8s-rulefiles-0 name: prometheus-k8s-rulefiles-0 - name: web-config secret: defaultMode: 420 secretName: prometheus-k8s-web-config - name: secret-kube-etcd-client-certs secret: defaultMode: 420 secretName: kube-etcd-client-certs - name: secret-prometheus-k8s-tls secret: defaultMode: 420 secretName: prometheus-k8s-tls - name: secret-prometheus-k8s-proxy secret: defaultMode: 420 secretName: prometheus-k8s-proxy - name: secret-prometheus-k8s-thanos-sidecar-tls secret: defaultMode: 420 secretName: prometheus-k8s-thanos-sidecar-tls - name: secret-kube-rbac-proxy secret: defaultMode: 420 secretName: kube-rbac-proxy - name: secret-metrics-client-certs secret: defaultMode: 420 secretName: metrics-client-certs - configMap: defaultMode: 420 name: serving-certs-ca-bundle name: configmap-serving-certs-ca-bundle - configMap: defaultMode: 420 name: kubelet-serving-ca-bundle name: configmap-kubelet-serving-ca-bundle - name: secret-prometheus-k8s-htpasswd secret: defaultMode: 420 secretName: prometheus-k8s-htpasswd - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: secret-grpc-tls secret: defaultMode: 420 secretName: prometheus-k8s-grpc-tls-bg9h55jpjel3o - configMap: defaultMode: 420 items: - key: 
ca-bundle.crt path: tls-ca-bundle.pem name: prometheus-trusted-ca-bundle-2rsonso43rc5p optional: true name: prometheus-trusted-ca-bundle - name: kube-api-access-gqzck projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:46:26Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:46:36Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:46:36Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:46:11Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://5d3320c71184e1addf19100e9b0e22b9aa5c6f32732e386a5da0abf8ace05f37 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc lastState: {} name: config-reloader ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:34Z" - containerID: cri-o://6c7642e88266e3d3f1c335f7891b27e145643cb20320fde8d209fcdb93853190 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:35Z" - containerID: cri-o://cafcf6053fe0a7b3c67ac6efb2b404448140fc54db10fca7d9c1766806ba8b75 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy-thanos ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:36Z" - containerID: cri-o://6b35ff495a60795a54256be712e5818deaa0be599b3b18b08fd8f1e71bb1ec5d image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 lastState: {} name: prom-label-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:36Z" - containerID: cri-o://3a414883c35b3e87c2c09f3b2b8867fcd0df66eee9f93187703e5085f8c10893 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf lastState: {} name: prometheus ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:34Z" - containerID: cri-o://a6923b8b95f035a65451e210e99b45c952f45b15c804d56f24f7eb1b32e60fba image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imageID: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 lastState: {} name: prometheus-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:35Z" - containerID: cri-o://f5cb2ce835f8fbed36917a4b3c532c1fcc1637ab0821627a665e3d1f9c366ef1 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247 lastState: {} name: thanos-sidecar ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:35Z" hostIP: 10.196.0.199 initContainerStatuses: - containerID: cri-o://9815cb281e70c2da417d073b1078853225e5b302c85f2121225a9351d61a913a image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc lastState: {} name: init-config-reloader ready: true restartCount: 0 state: terminated: containerID: cri-o://9815cb281e70c2da417d073b1078853225e5b302c85f2121225a9351d61a913a exitCode: 0 finishedAt: "2022-10-11T16:46:25Z" reason: Completed startedAt: "2022-10-11T16:46:25Z" phase: Running podIP: 10.128.23.18 podIPs: - ip: 10.128.23.18 qosClass: Burstable startTime: "2022-10-11T16:46:11Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.35" ], "mac": "fa:16:3e:94:4b:ef", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.35" ], "mac": "fa:16:3e:94:4b:ef", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: prometheus openshift.io/scc: nonroot creationTimestamp: "2022-10-11T16:46:10Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: prometheus-k8s- labels: app: prometheus app.kubernetes.io/component: prometheus app.kubernetes.io/instance: k8s app.kubernetes.io/managed-by: prometheus-operator app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 2.29.2 controller-revision-hash: prometheus-k8s-77f9b66476 operator.prometheus.io/name: k8s operator.prometheus.io/shard: "0" prometheus: k8s statefulset.kubernetes.io/pod-name: prometheus-k8s-1 name: prometheus-k8s-1 namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: StatefulSet name: prometheus-k8s uid: 0cf40d35-afcd-411c-af5e-48a33a70f1b0 resourceVersion: "68476" uid: 50ef3ad7-a34a-4c5d-b2ee-d866e3e2733e spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/component: prometheus app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: openshift-monitoring prometheus: k8s namespaces: - openshift-monitoring topologyKey: kubernetes.io/hostname weight: 100 containers: - args: - --web.console.templates=/etc/prometheus/consoles - --web.console.libraries=/etc/prometheus/console_libraries - --config.file=/etc/prometheus/config_out/prometheus.env.yaml - --storage.tsdb.path=/prometheus - --storage.tsdb.retention.time=15d - --web.enable-lifecycle - --web.external-url=https://prometheus-k8s-openshift-monitoring.apps.ostest.shiftstack.com/ - 
--web.route-prefix=/ - --web.listen-address=127.0.0.1:9090 - --web.config.file=/etc/prometheus/web_config/web-config.yaml image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf imagePullPolicy: IfNotPresent name: prometheus readinessProbe: exec: command: - sh - -c - if [ -x "$(command -v curl)" ]; then exec curl http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi failureThreshold: 120 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 3 resources: requests: cpu: 70m memory: 1Gi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/pki/ca-trust/extracted/pem/ name: prometheus-trusted-ca-bundle readOnly: true - mountPath: /etc/prometheus/config_out name: config-out readOnly: true - mountPath: /etc/prometheus/certs name: tls-assets readOnly: true - mountPath: /prometheus name: prometheus-k8s-db subPath: prometheus-db - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0 name: prometheus-k8s-rulefiles-0 - mountPath: /etc/prometheus/web_config/web-config.yaml name: web-config readOnly: true subPath: web-config.yaml - mountPath: /etc/prometheus/secrets/kube-etcd-client-certs name: secret-kube-etcd-client-certs readOnly: true - mountPath: /etc/prometheus/secrets/prometheus-k8s-tls name: secret-prometheus-k8s-tls readOnly: true - mountPath: /etc/prometheus/secrets/prometheus-k8s-proxy name: secret-prometheus-k8s-proxy readOnly: true - mountPath: /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls name: secret-prometheus-k8s-thanos-sidecar-tls readOnly: true - mountPath: /etc/prometheus/secrets/kube-rbac-proxy name: secret-kube-rbac-proxy readOnly: true - mountPath: /etc/prometheus/secrets/metrics-client-certs name: secret-metrics-client-certs readOnly: true - mountPath: /etc/prometheus/configmaps/serving-certs-ca-bundle name: configmap-serving-certs-ca-bundle readOnly: true - mountPath: /etc/prometheus/configmaps/kubelet-serving-ca-bundle name: configmap-kubelet-serving-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qqxsv readOnly: true - args: - --listen-address=localhost:8080 - --reload-url=http://localhost:9090/-/reload - --config-file=/etc/prometheus/config/prometheus.yaml.gz - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 command: - /bin/prometheus-config-reloader env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: SHARD value: "0" image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imagePullPolicy: IfNotPresent name: config-reloader resources: requests: cpu: 1m memory: 10Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/prometheus/config name: config - mountPath: /etc/prometheus/config_out name: config-out - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0 name: prometheus-k8s-rulefiles-0 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qqxsv readOnly: true - args: - sidecar - --prometheus.url=http://localhost:9090/ - 
--tsdb.path=/prometheus - --grpc-address=[$(POD_IP)]:10901 - --http-address=127.0.0.1:10902 - --grpc-server-tls-cert=/etc/tls/grpc/server.crt - --grpc-server-tls-key=/etc/tls/grpc/server.key - --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt env: - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247 imagePullPolicy: IfNotPresent name: thanos-sidecar ports: - containerPort: 10902 name: http protocol: TCP - containerPort: 10901 name: grpc protocol: TCP resources: requests: cpu: 1m memory: 25Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/grpc name: secret-grpc-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qqxsv readOnly: true - args: - -provider=openshift - -https-address=:9091 - -http-address= - -email-domain=* - -upstream=http://localhost:9090 - -openshift-service-account=prometheus-k8s - '-openshift-sar={"resource": "namespaces", "verb": "get"}' - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}' - -tls-cert=/etc/tls/private/tls.crt - -tls-key=/etc/tls/private/tls.key - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token - -cookie-secret-file=/etc/proxy/secrets/session_secret - -openshift-ca=/etc/pki/tls/cert.pem - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt - -htpasswd-file=/etc/proxy/htpasswd/auth env: - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imagePullPolicy: IfNotPresent name: prometheus-proxy ports: - containerPort: 9091 name: web protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-prometheus-k8s-tls - mountPath: /etc/proxy/secrets name: secret-prometheus-k8s-proxy - mountPath: /etc/proxy/htpasswd name: secret-prometheus-k8s-htpasswd - mountPath: /etc/pki/ca-trust/extracted/pem/ name: prometheus-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qqxsv readOnly: true - args: - --secure-listen-address=0.0.0.0:9092 - --upstream=http://127.0.0.1:9095 - --config-file=/etc/kube-rbac-proxy/config.yaml - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --logtostderr=true - --v=10 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9092 name: tenancy protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: 
secret-prometheus-k8s-tls - mountPath: /etc/kube-rbac-proxy name: secret-kube-rbac-proxy - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qqxsv readOnly: true - args: - --insecure-listen-address=127.0.0.1:9095 - --upstream=http://127.0.0.1:9090 - --label=namespace image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imagePullPolicy: IfNotPresent name: prom-label-proxy resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qqxsv readOnly: true - args: - --secure-listen-address=[$(POD_IP)]:10902 - --upstream=http://127.0.0.1:10902 - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --allow-paths=/metrics - --logtostderr=true - --client-ca-file=/etc/tls/client/client-ca.crt env: - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy-thanos ports: - containerPort: 10902 name: thanos-proxy protocol: TCP resources: requests: cpu: 1m memory: 10Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-prometheus-k8s-thanos-sidecar-tls - mountPath: /etc/tls/client name: metrics-client-ca readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qqxsv readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostname: prometheus-k8s-1 imagePullSecrets: - name: prometheus-k8s-dockercfg-f5qm8 initContainers: - args: - --watch-interval=0 - --listen-address=:8080 - --config-file=/etc/prometheus/config/prometheus.yaml.gz - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 command: - /bin/prometheus-config-reloader env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: SHARD value: "0" image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imagePullPolicy: IfNotPresent name: init-config-reloader resources: requests: cpu: 100m memory: 50Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/prometheus/config name: config - mountPath: /etc/prometheus/config_out name: config-out - mountPath: /etc/prometheus/rules/prometheus-k8s-rulefiles-0 name: prometheus-k8s-rulefiles-0 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qqxsv readOnly: true nodeName: ostest-n5rnf-worker-0-8kq82 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: 
system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65534 runAsNonRoot: true runAsUser: 65534 seLinuxOptions: level: s0:c21,c0 serviceAccount: prometheus-k8s serviceAccountName: prometheus-k8s subdomain: prometheus-operated terminationGracePeriodSeconds: 600 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: prometheus-k8s-db persistentVolumeClaim: claimName: prometheus-k8s-db-prometheus-k8s-1 - name: config secret: defaultMode: 420 secretName: prometheus-k8s - name: tls-assets secret: defaultMode: 420 secretName: prometheus-k8s-tls-assets - emptyDir: {} name: config-out - configMap: defaultMode: 420 name: prometheus-k8s-rulefiles-0 name: prometheus-k8s-rulefiles-0 - name: web-config secret: defaultMode: 420 secretName: prometheus-k8s-web-config - name: secret-kube-etcd-client-certs secret: defaultMode: 420 secretName: kube-etcd-client-certs - name: secret-prometheus-k8s-tls secret: defaultMode: 420 secretName: prometheus-k8s-tls - name: secret-prometheus-k8s-proxy secret: defaultMode: 420 secretName: prometheus-k8s-proxy - name: secret-prometheus-k8s-thanos-sidecar-tls secret: defaultMode: 420 secretName: prometheus-k8s-thanos-sidecar-tls - name: secret-kube-rbac-proxy secret: defaultMode: 420 secretName: kube-rbac-proxy - name: secret-metrics-client-certs secret: defaultMode: 420 secretName: metrics-client-certs - configMap: defaultMode: 420 name: serving-certs-ca-bundle name: configmap-serving-certs-ca-bundle - configMap: defaultMode: 420 name: kubelet-serving-ca-bundle name: configmap-kubelet-serving-ca-bundle - name: secret-prometheus-k8s-htpasswd secret: defaultMode: 420 secretName: prometheus-k8s-htpasswd - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: secret-grpc-tls secret: defaultMode: 420 secretName: prometheus-k8s-grpc-tls-bg9h55jpjel3o - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: prometheus-trusted-ca-bundle-2rsonso43rc5p optional: true name: prometheus-trusted-ca-bundle - name: kube-api-access-qqxsv projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:46:31Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:46:57Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:46:57Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:46:11Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://8f1de870d2f059356e38367f619aa070b2784584fd75705867ea64fbd0e41e46 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc lastState: {} name: config-reloader ready: true restartCount: 0 started: true state: running: 
startedAt: "2022-10-11T16:46:41Z" - containerID: cri-o://c375c94f8370593926824bdf14898b7fbabf403375bbedd3f399502fbcf51adc image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:48Z" - containerID: cri-o://7780a1ec4a1b9561b06dc659c72b488406246bf2ba470d9e3190e650af070647 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy-thanos ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:56Z" - containerID: cri-o://1e75a55b09ea279ec7878c3b3fb2dbbcc9771651400c64368240fe20effe7d95 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60 lastState: {} name: prom-label-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:56Z" - containerID: cri-o://ff98d8a8604e6b4fd133088201e63266e8d65eef437dacd10abd3db0f68df31a image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf lastState: {} name: prometheus ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:41Z" - containerID: cri-o://7f58ea7cc403c27cdff172c8e8fda71659bd03f3474f139d85f5f707abe55558 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 lastState: {} name: prometheus-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:48Z" - containerID: cri-o://05008e4f94d89864fe153ff8d78f28477f7a39b049faf05bb0f60f6472fc27f2 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247 lastState: {} name: thanos-sidecar ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:46:48Z" hostIP: 10.196.2.72 initContainerStatuses: - containerID: cri-o://2b6bef26018b326930cad08bb9d3b8b0c61609a26327e0b8383a5ffbcca91d4c image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc lastState: {} name: init-config-reloader ready: true restartCount: 0 state: terminated: containerID: cri-o://2b6bef26018b326930cad08bb9d3b8b0c61609a26327e0b8383a5ffbcca91d4c exitCode: 0 finishedAt: "2022-10-11T16:46:30Z" reason: Completed startedAt: "2022-10-11T16:46:30Z" phase: Running podIP: 10.128.23.35 
podIPs: - ip: 10.128.23.35 qosClass: Burstable startTime: "2022-10-11T16:46:12Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.177" ], "mac": "fa:16:3e:1a:10:dc", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.177" ], "mac": "fa:16:3e:1a:10:dc", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: prometheus-operator openshift.io/scc: restricted creationTimestamp: "2022-10-11T16:14:10Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: prometheus-operator-7bcc4bcc6b- labels: app.kubernetes.io/component: controller app.kubernetes.io/name: prometheus-operator app.kubernetes.io/part-of: openshift-monitoring app.kubernetes.io/version: 0.49.0 pod-template-hash: 7bcc4bcc6b name: prometheus-operator-7bcc4bcc6b-zlbgw namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: prometheus-operator-7bcc4bcc6b uid: 254d5a3d-70e9-4382-86c9-e36660822831 resourceVersion: "6842" uid: 4a35c240-ec54-45e3-b1a8-5efe98a87928 spec: containers: - args: - --kubelet-service=kube-system/kubelet - --prometheus-config-reloader=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc - --prometheus-instance-namespaces=openshift-monitoring - --thanos-ruler-instance-namespaces=openshift-monitoring - --alertmanager-instance-namespaces=openshift-monitoring - --config-reloader-cpu-limit=0 - --config-reloader-memory-limit=0 - --web.enable-tls=true - --web.tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --web.tls-min-version=VersionTLS12 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62caff9b13ff229d124b2cb633699775684a348b573f6a6f07bd6f4039b7b0f5 imagePullPolicy: IfNotPresent name: prometheus-operator ports: - containerPort: 8080 name: http protocol: TCP resources: requests: cpu: 5m memory: 150Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: prometheus-operator-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rx5sv readOnly: true - args: - --logtostderr - --secure-listen-address=:8443 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --upstream=https://prometheus-operator.openshift-monitoring.svc:8080/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --client-ca-file=/etc/tls/client/client-ca.crt - --upstream-ca-file=/etc/configmaps/operator-cert-ca-bundle/service-ca.crt image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 8443 name: https protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - 
SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: prometheus-operator-tls - mountPath: /etc/configmaps/operator-cert-ca-bundle name: operator-certs-ca-bundle - mountPath: /etc/tls/client name: metrics-client-ca - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rx5sv readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: ostest-n5rnf-master-2 nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/master: "" preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000420000 seLinuxOptions: level: s0:c21,c0 serviceAccount: prometheus-operator serviceAccountName: prometheus-operator terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: prometheus-operator-tls secret: defaultMode: 420 secretName: prometheus-operator-tls - configMap: defaultMode: 420 name: operator-certs-ca-bundle name: operator-certs-ca-bundle - configMap: defaultMode: 420 name: metrics-client-ca name: metrics-client-ca - name: kube-api-access-rx5sv projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:14:10Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:14:57Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:14:57Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:14:10Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://016fcc07cea03929733c6cf2f74aa7648f3e3e72666bc6ae0e8ccef82359f4be image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: terminated: containerID: cri-o://b43d2ab6d990fc3d6b51170adf95df512a430046b85bea281292d41eb82963b0 exitCode: 255 finishedAt: "2022-10-11T16:14:55Z" message: "imachinery/pkg/util/wait/wait.go:133 +0x98\nk8s.io/apimachinery/pkg/util/wait.Until(0xc000390050, 0x3b9aca00, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d\ncreated by k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:171 +0x245\n\ngoroutine 35 [select]:\nk8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0004c0000, 0xc000390070, 0xc00009c120, 0x0, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 
+0xf1\nk8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc000390070, 0x0, 0x0, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc000390070, 0x0, 0xb, 0xc000123f20)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb0\ncreated by k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:174 +0x2b3\n\ngoroutine 36 [select]:\nk8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0x0, 0xc0003900b0, 0x1932f58, 0xc000474000)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0x87\ncreated by k8s.io/apimachinery/pkg/util/wait.contextForChannel\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c\n\ngoroutine 37 [select]:\nk8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00009c480, 0xdf8475800, 0x0, 0xc00009c300)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x135\ncreated by k8s.io/apimachinery/pkg/util/wait.poller.func1\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c\n" reason: Error startedAt: "2022-10-11T16:14:55Z" name: kube-rbac-proxy ready: true restartCount: 1 started: true state: running: startedAt: "2022-10-11T16:14:56Z" - containerID: cri-o://02fe220c4e55596fecf911246d99d3117df987bfde39598aa58e23feb0aa0fd8 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62caff9b13ff229d124b2cb633699775684a348b573f6a6f07bd6f4039b7b0f5 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62caff9b13ff229d124b2cb633699775684a348b573f6a6f07bd6f4039b7b0f5 lastState: {} name: prometheus-operator ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:14:49Z" hostIP: 10.196.3.187 phase: Running podIP: 10.128.22.177 podIPs: - ip: 10.128.22.177 qosClass: Burstable startTime: "2022-10-11T16:14:10Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.239" ], "mac": "fa:16:3e:1a:7a:87", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.22.239" ], "mac": "fa:16:3e:1a:7a:87", "default": true, "dns": {} }] openshift.io/scc: restricted creationTimestamp: "2022-10-11T16:15:04Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: telemeter-client-6d8969b4bf- labels: k8s-app: telemeter-client pod-template-hash: 6d8969b4bf name: telemeter-client-6d8969b4bf-dffrt namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: telemeter-client-6d8969b4bf uid: 3001942a-2802-482d-a134-f89d1cf69fb9 resourceVersion: "61502" uid: 4910b4f1-5eb2-45e5-9d80-09f1aed4537c spec: containers: - command: - /usr/bin/telemeter-client - --id=$(ID) - --from=$(FROM) - --from-ca-file=/etc/serving-certs-ca-bundle/service-ca.crt - --from-token-file=/var/run/secrets/kubernetes.io/serviceaccount/token - --to=$(TO) - --to-token-file=/etc/telemeter/token - --listen=localhost:8080 - --anonymize-salt-file=/etc/telemeter/salt - --anonymize-labels=$(ANONYMIZE_LABELS) - --match={__name__=~"cluster:usage:.*"} - 
--match={__name__="count:up0"} - --match={__name__="count:up1"} - --match={__name__="cluster_version"} - --match={__name__="cluster_version_available_updates"} - --match={__name__="cluster_operator_up"} - --match={__name__="cluster_operator_conditions"} - --match={__name__="cluster_version_payload"} - --match={__name__="cluster_installer"} - --match={__name__="cluster_infrastructure_provider"} - --match={__name__="cluster_feature_set"} - --match={__name__="instance:etcd_object_counts:sum"} - --match={__name__="ALERTS",alertstate="firing"} - --match={__name__="code:apiserver_request_total:rate:sum"} - --match={__name__="cluster:capacity_cpu_cores:sum"} - --match={__name__="cluster:capacity_memory_bytes:sum"} - --match={__name__="cluster:cpu_usage_cores:sum"} - --match={__name__="cluster:memory_usage_bytes:sum"} - --match={__name__="openshift:cpu_usage_cores:sum"} - --match={__name__="openshift:memory_usage_bytes:sum"} - --match={__name__="workload:cpu_usage_cores:sum"} - --match={__name__="workload:memory_usage_bytes:sum"} - --match={__name__="cluster:virt_platform_nodes:sum"} - --match={__name__="cluster:node_instance_type_count:sum"} - --match={__name__="cnv:vmi_status_running:count"} - --match={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"} - --match={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"} - --match={__name__="subscription_sync_total"} - --match={__name__="olm_resolution_duration_seconds"} - --match={__name__="csv_succeeded"} - --match={__name__="csv_abnormal"} - --match={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"} - --match={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"} - --match={__name__="ceph_cluster_total_bytes"} - --match={__name__="ceph_cluster_total_used_raw_bytes"} - --match={__name__="ceph_health_status"} - --match={__name__="job:ceph_osd_metadata:count"} - --match={__name__="job:kube_pv:count"} - --match={__name__="job:ceph_pools_iops:total"} - --match={__name__="job:ceph_pools_iops_bytes:total"} - --match={__name__="job:ceph_versions_running:count"} - --match={__name__="job:noobaa_total_unhealthy_buckets:sum"} - --match={__name__="job:noobaa_bucket_count:sum"} - --match={__name__="job:noobaa_total_object_count:sum"} - --match={__name__="noobaa_accounts_num"} - --match={__name__="noobaa_total_usage"} - --match={__name__="console_url"} - --match={__name__="cluster:network_attachment_definition_instances:max"} - --match={__name__="cluster:network_attachment_definition_enabled_instance_up:max"} - --match={__name__="insightsclient_request_send_total"} - --match={__name__="cam_app_workload_migrations"} - --match={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"} - --match={__name__="cluster:alertmanager_integrations:max"} - --match={__name__="cluster:telemetry_selected_series:count"} - --match={__name__="openshift:prometheus_tsdb_head_series:sum"} - --match={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"} - --match={__name__="monitoring:container_memory_working_set_bytes:sum"} - --match={__name__="namespace_job:scrape_series_added:topk3_sum1h"} - --match={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"} - --match={__name__="monitoring:haproxy_server_http_responses_total:sum"} - --match={__name__="rhmi_status"} - --match={__name__="cluster_legacy_scheduler_policy"} - --match={__name__="cluster_master_schedulable"} - --match={__name__="che_workspace_status"} - 
--match={__name__="che_workspace_started_total"} - --match={__name__="che_workspace_failure_total"} - --match={__name__="che_workspace_start_time_seconds_sum"} - --match={__name__="che_workspace_start_time_seconds_count"} - --match={__name__="cco_credentials_mode"} - --match={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"} - --match={__name__="visual_web_terminal_sessions_total"} - --match={__name__="acm_managed_cluster_info"} - --match={__name__="cluster:vsphere_vcenter_info:sum"} - --match={__name__="cluster:vsphere_esxi_version_total:sum"} - --match={__name__="cluster:vsphere_node_hw_version_total:sum"} - --match={__name__="openshift:build_by_strategy:sum"} - --match={__name__="rhods_aggregate_availability"} - --match={__name__="rhods_total_users"} - --match={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"} - --match={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"} - --match={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"} - --match={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"} - --match={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"} - --match={__name__="jaeger_operator_instances_storage_types"} - --match={__name__="jaeger_operator_instances_strategies"} - --match={__name__="jaeger_operator_instances_agent_strategies"} - --match={__name__="appsvcs:cores_by_product:sum"} - --match={__name__="nto_custom_profiles:count"} - --limit-bytes=5242880 env: - name: ANONYMIZE_LABELS - name: FROM value: https://prometheus-k8s.openshift-monitoring.svc:9091 - name: ID value: e65548fc-bd07-47dc-b550-8a4fa01dead9 - name: TO value: https://infogw.api.openshift.com/ - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a3f86f1b302389d805f18271a6d00cb2e8b6e9c4a859f9f20aa6d0c4f574371 imagePullPolicy: IfNotPresent name: telemeter-client ports: - containerPort: 8080 name: http protocol: TCP resources: requests: cpu: 1m memory: 40Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/serving-certs-ca-bundle name: serving-certs-ca-bundle - mountPath: /etc/telemeter name: secret-telemeter-client - mountPath: /etc/pki/ca-trust/extracted/pem/ name: telemeter-trusted-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ds46w readOnly: true - args: - --reload-url=http://localhost:8080/-/reload - --watched-dir=/etc/serving-certs-ca-bundle image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imagePullPolicy: IfNotPresent name: reload resources: requests: cpu: 1m memory: 10Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/serving-certs-ca-bundle name: serving-certs-ca-bundle - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ds46w readOnly: true - args: - --secure-listen-address=:8443 - --upstream=http://127.0.0.1:8080/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - 
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 8443 name: https protocol: TCP resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/tls/private name: telemeter-client-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ds46w readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: ostest-n5rnf-worker-0-94fxs nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000420000 seLinuxOptions: level: s0:c21,c0 serviceAccount: telemeter-client serviceAccountName: telemeter-client terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - configMap: defaultMode: 420 name: telemeter-client-serving-certs-ca-bundle name: serving-certs-ca-bundle - name: secret-telemeter-client secret: defaultMode: 420 secretName: telemeter-client - name: telemeter-client-tls secret: defaultMode: 420 secretName: telemeter-client-tls - configMap: defaultMode: 420 items: - key: ca-bundle.crt path: tls-ca-bundle.pem name: telemeter-trusted-ca-bundle-2rsonso43rc5p optional: true name: telemeter-trusted-ca-bundle - name: kube-api-access-ds46w projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:52Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:49Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-10-11T16:30:49Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-10-11T16:29:52Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://e49a5cc7978570f2d6c8c603c5dbb15ec57c271cd360efb0636b1e06d70757b2 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:48Z" - containerID: cri-o://499f1362b275ac07fcb7ae4e1ee1445b83c5e3d5b5fc85ab29a58c66a1bdba7c image: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc lastState: {} name: reload ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:48Z" - containerID: cri-o://111972f6103805475ef9e6d819a3e32bb4ec63154f6b25c5049a1e7a1667db81 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a3f86f1b302389d805f18271a6d00cb2e8b6e9c4a859f9f20aa6d0c4f574371 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a3f86f1b302389d805f18271a6d00cb2e8b6e9c4a859f9f20aa6d0c4f574371 lastState: {} name: telemeter-client ready: true restartCount: 0 started: true state: running: startedAt: "2022-10-11T16:30:36Z" hostIP: 10.196.2.169 phase: Running podIP: 10.128.22.239 podIPs: - ip: 10.128.22.239 qosClass: Burstable startTime: "2022-10-11T16:29:52Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.183" ], "mac": "fa:16:3e:c3:a9:de", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: |- [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.183" ], "mac": "fa:16:3e:c3:a9:de", "default": true, "dns": {} }] openshift.io/scc: restricted creationTimestamp: "2022-10-11T16:30:12Z" finalizers: - kuryr.openstack.org/pod-finalizer generateName: thanos-querier-6699db6d95- labels: app.kubernetes.io/component: query-layer app.kubernetes.io/instance: thanos-querier app.kubernetes.io/name: thanos-query app.kubernetes.io/version: 0.22.0 pod-template-hash: 6699db6d95 name: thanos-querier-6699db6d95-42mpw namespace: openshift-monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: thanos-querier-6699db6d95 uid: 3dc07169-b785-4638-bae8-477acf441d9f resourceVersion: "61844" uid: 6987d5e8-4a23-49ad-ab57-6240ef3c4bd7 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: query-layer app.kubernetes.io/instance: thanos-querier app.kubernetes.io/name: thanos-query topologyKey: kubernetes.io/hostname containers: - args: - query - --grpc-address=127.0.0.1:10901 - --http-address=127.0.0.1:9090 - --log.format=logfmt - --query.replica-label=prometheus_replica - --query.replica-label=thanos_ruler_replica - --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local - --query.auto-downsampling - --store.sd-dns-resolver=miekgdns - --grpc-client-tls-secure - --grpc-client-tls-cert=/etc/tls/grpc/client.crt - --grpc-client-tls-key=/etc/tls/grpc/client.key - --grpc-client-tls-ca=/etc/tls/grpc/ca.crt - --grpc-client-server-name=prometheus-grpc - --rule=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local - --target=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local env: - name: HOST_IP_ADDRESS valueFrom: fieldRef: apiVersion: v1 fieldPath: status.hostIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247 imagePullPolicy: IfNotPresent name: thanos-query ports: - containerPort: 9090 name: http protocol: TCP resources: requests: cpu: 10m memory: 12Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: 
FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/grpc name: secret-grpc-tls - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ml55t readOnly: true - args: - -provider=openshift - -https-address=:9091 - -http-address= - -email-domain=* - -upstream=http://localhost:9090 - -openshift-service-account=thanos-querier - '-openshift-sar={"resource": "namespaces", "verb": "get"}' - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}' - -tls-cert=/etc/tls/private/tls.crt - -tls-key=/etc/tls/private/tls.key - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token - -cookie-secret-file=/etc/proxy/secrets/session_secret - -openshift-ca=/etc/pki/tls/cert.pem - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt - -bypass-auth-for=^/-/(healthy|ready)$ - -htpasswd-file=/etc/proxy/htpasswd/auth env: - name: HTTP_PROXY - name: HTTPS_PROXY - name: NO_PROXY image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 4 httpGet: path: /-/healthy port: 9091 scheme: HTTPS initialDelaySeconds: 5 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 1 name: oauth-proxy ports: - containerPort: 9091 name: web protocol: TCP readinessProbe: failureThreshold: 20 httpGet: path: /-/ready port: 9091 scheme: HTTPS initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 1m memory: 20Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-thanos-querier-tls - mountPath: /etc/proxy/secrets name: secret-thanos-querier-oauth-cookie - mountPath: /etc/pki/ca-trust/extracted/pem/ name: thanos-querier-trusted-ca-bundle readOnly: true - mountPath: /etc/proxy/htpasswd name: secret-thanos-querier-oauth-htpasswd - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ml55t readOnly: true - args: - --secure-listen-address=0.0.0.0:9092 - --upstream=http://127.0.0.1:9095 - --config-file=/etc/kube-rbac-proxy/config.yaml - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 - --logtostderr=true - --allow-paths=/api/v1/query,/api/v1/query_range,/api/v1/labels,/api/v1/label/*/values,/api/v1/series image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9092 name: tenancy protocol: TCP resources: requests: cpu: 1m memory: 15Mi securityContext: capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsUser: 1000420000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/tls/private name: secret-thanos-querier-tls - mountPath: /etc/kube-rbac-proxy name: secret-thanos-querier-kube-rbac-proxy - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ml55t readOnly: true - args: - --insecure-listen-address=127.0.0.1:9095 - 
--upstream=http://127.0.0.1:9090
      - --label=namespace
      - --enable-label-apis
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ml55t
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9093
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --allow-paths=/api/v1/rules
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-rules
      ports:
      - containerPort: 9093
        name: tenancy-rules
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-thanos-querier-kube-rbac-proxy-rules
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ml55t
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: thanos-querier-dockercfg-pphnw
    nodeName: ostest-n5rnf-worker-0-j4pkp
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: thanos-querier
    serviceAccountName: thanos-querier
    terminationGracePeriodSeconds: 120
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: secret-thanos-querier-tls
      secret:
        defaultMode: 420
        secretName: thanos-querier-tls
    - name: secret-thanos-querier-oauth-cookie
      secret:
        defaultMode: 420
        secretName: thanos-querier-oauth-cookie
    - name: secret-thanos-querier-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: thanos-querier-kube-rbac-proxy
    - name: secret-thanos-querier-kube-rbac-proxy-rules
      secret:
        defaultMode: 420
        secretName: thanos-querier-kube-rbac-proxy-rules
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: thanos-querier-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: thanos-querier-trusted-ca-bundle
    - name: secret-thanos-querier-oauth-htpasswd
      secret:
        defaultMode: 420
        secretName: thanos-querier-oauth-htpasswd
    - name: secret-grpc-tls
      secret:
        defaultMode: 420
        secretName: thanos-querier-grpc-tls-ejqjssqja76hi
    - name: kube-api-access-ml55t
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:32Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:09Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:09Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:32Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://3de35991ef5607ba09fd496e85cb6d709d8ee3a8d51efe3ef8b013d5d0cfd1ba
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:00Z"
    - containerID: cri-o://8b3ab57752f962e1d3b299ee3c96f502b63018a733766b19ab9d926ae741e562
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-rules
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:07Z"
    - containerID: cri-o://73f6483090ebae1503fd394766af8a4d84cdcd65fd046367846e3bc1b3c3ff81
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: oauth-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:00Z"
    - containerID: cri-o://6f61c6c082a310415eac3f33fa30b330e4940e82ae1cc7e149ab73c564f4a562
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:07Z"
    - containerID: cri-o://afc3af17ece11b17afc10a01856931a8672c7433642b2b192199a103256b621d
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      lastState: {}
      name: thanos-query
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:30:59Z"
    hostIP: 10.196.0.199
    phase: Running
    podIP: 10.128.23.183
    podIPs:
    - ip: 10.128.23.183
    qosClass: Burstable
    startTime: "2022-10-11T16:30:32Z"
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.v1.cni.cncf.io/network-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.114"
            ],
            "mac": "fa:16:3e:64:00:9b",
            "default": true,
            "dns": {}
        }]
      k8s.v1.cni.cncf.io/networks-status: |-
        [{
            "name": "kuryr",
            "interface": "eth0",
            "ips": [
                "10.128.23.114"
            ],
            "mac": "fa:16:3e:64:00:9b",
            "default": true,
            "dns": {}
        }]
      openshift.io/scc: restricted
    creationTimestamp: "2022-10-11T16:30:12Z"
    finalizers:
    - kuryr.openstack.org/pod-finalizer
    generateName: thanos-querier-6699db6d95-
    labels:
      app.kubernetes.io/component: query-layer
      app.kubernetes.io/instance: thanos-querier
      app.kubernetes.io/name: thanos-query
      app.kubernetes.io/version: 0.22.0
      pod-template-hash: 6699db6d95
    name: thanos-querier-6699db6d95-cvbzq
    namespace: openshift-monitoring
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: thanos-querier-6699db6d95
      uid: 3dc07169-b785-4638-bae8-477acf441d9f
    resourceVersion: "62472"
    uid: 95c88db1-e599-4351-8604-3655d9250791
  spec:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: query-layer
              app.kubernetes.io/instance: thanos-querier
              app.kubernetes.io/name: thanos-query
          topologyKey: kubernetes.io/hostname
    containers:
    - args:
      - query
      - --grpc-address=127.0.0.1:10901
      - --http-address=127.0.0.1:9090
      - --log.format=logfmt
      - --query.replica-label=prometheus_replica
      - --query.replica-label=thanos_ruler_replica
      - --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      - --query.auto-downsampling
      - --store.sd-dns-resolver=miekgdns
      - --grpc-client-tls-secure
      - --grpc-client-tls-cert=/etc/tls/grpc/client.crt
      - --grpc-client-tls-key=/etc/tls/grpc/client.key
      - --grpc-client-tls-ca=/etc/tls/grpc/ca.crt
      - --grpc-client-server-name=prometheus-grpc
      - --rule=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      - --target=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
      env:
      - name: HOST_IP_ADDRESS
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.hostIP
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imagePullPolicy: IfNotPresent
      name: thanos-query
      ports:
      - containerPort: 9090
        name: http
        protocol: TCP
      resources:
        requests:
          cpu: 10m
          memory: 12Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/grpc
        name: secret-grpc-tls
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    - args:
      - -provider=openshift
      - -https-address=:9091
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9090
      - -openshift-service-account=thanos-querier
      - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      - -bypass-auth-for=^/-/(healthy|ready)$
      - -htpasswd-file=/etc/proxy/htpasswd/auth
      env:
      - name: HTTP_PROXY
      - name: HTTPS_PROXY
      - name: NO_PROXY
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 4
        httpGet:
          path: /-/healthy
          port: 9091
          scheme: HTTPS
        initialDelaySeconds: 5
        periodSeconds: 30
        successThreshold: 1
        timeoutSeconds: 1
      name: oauth-proxy
      ports:
      - containerPort: 9091
        name: web
        protocol: TCP
      readinessProbe:
        failureThreshold: 20
        httpGet:
          path: /-/ready
          port: 9091
          scheme: HTTPS
        initialDelaySeconds: 5
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        requests:
          cpu: 1m
          memory: 20Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/proxy/secrets
        name: secret-thanos-querier-oauth-cookie
      - mountPath: /etc/pki/ca-trust/extracted/pem/
        name: thanos-querier-trusted-ca-bundle
        readOnly: true
      - mountPath: /etc/proxy/htpasswd
        name: secret-thanos-querier-oauth-htpasswd
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9092
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --allow-paths=/api/v1/query,/api/v1/query_range,/api/v1/labels,/api/v1/label/*/values,/api/v1/series
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy
      ports:
      - containerPort: 9092
        name: tenancy
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-thanos-querier-kube-rbac-proxy
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    - args:
      - --insecure-listen-address=127.0.0.1:9095
      - --upstream=http://127.0.0.1:9090
      - --label=namespace
      - --enable-label-apis
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imagePullPolicy: IfNotPresent
      name: prom-label-proxy
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    - args:
      - --secure-listen-address=0.0.0.0:9093
      - --upstream=http://127.0.0.1:9095
      - --config-file=/etc/kube-rbac-proxy/config.yaml
      - --tls-cert-file=/etc/tls/private/tls.crt
      - --tls-private-key-file=/etc/tls/private/tls.key
      - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      - --logtostderr=true
      - --allow-paths=/api/v1/rules
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imagePullPolicy: IfNotPresent
      name: kube-rbac-proxy-rules
      ports:
      - containerPort: 9093
        name: tenancy-rules
        protocol: TCP
      resources:
        requests:
          cpu: 1m
          memory: 15Mi
      securityContext:
        capabilities:
          drop:
          - KILL
          - MKNOD
          - SETGID
          - SETUID
        runAsUser: 1000420000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
      volumeMounts:
      - mountPath: /etc/tls/private
        name: secret-thanos-querier-tls
      - mountPath: /etc/kube-rbac-proxy
        name: secret-thanos-querier-kube-rbac-proxy-rules
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-ddjdg
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: thanos-querier-dockercfg-pphnw
    nodeName: ostest-n5rnf-worker-0-94fxs
    nodeSelector:
      kubernetes.io/os: linux
    preemptionPolicy: PreemptLowerPriority
    priority: 2000000000
    priorityClassName: system-cluster-critical
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1000420000
      seLinuxOptions:
        level: s0:c21,c0
    serviceAccount: thanos-querier
    serviceAccountName: thanos-querier
    terminationGracePeriodSeconds: 120
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
    volumes:
    - name: secret-thanos-querier-tls
      secret:
        defaultMode: 420
        secretName: thanos-querier-tls
    - name: secret-thanos-querier-oauth-cookie
      secret:
        defaultMode: 420
        secretName: thanos-querier-oauth-cookie
    - name: secret-thanos-querier-kube-rbac-proxy
      secret:
        defaultMode: 420
        secretName: thanos-querier-kube-rbac-proxy
    - name: secret-thanos-querier-kube-rbac-proxy-rules
      secret:
        defaultMode: 420
        secretName: thanos-querier-kube-rbac-proxy-rules
    - configMap:
        defaultMode: 420
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
        name: thanos-querier-trusted-ca-bundle-2rsonso43rc5p
        optional: true
      name: thanos-querier-trusted-ca-bundle
    - name: secret-thanos-querier-oauth-htpasswd
      secret:
        defaultMode: 420
        secretName: thanos-querier-oauth-htpasswd
    - name: secret-grpc-tls
      secret:
        defaultMode: 420
        secretName: thanos-querier-grpc-tls-ejqjssqja76hi
    - name: kube-api-access-ddjdg
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
        - configMap:
            items:
            - key: service-ca.crt
              path: service-ca.crt
            name: openshift-service-ca.crt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:12Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:42Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:31:42Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-10-11T16:30:12Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: cri-o://3925db2db4625ef59e27c39d662e21a6d627ffd9cc4d5cb107c5cfeb349d5125
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:34Z"
    - containerID: cri-o://29b84274309e904f9231b9f6071bd5646a0c3f7014fac86a0301d192a88f2d36
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
      lastState: {}
      name: kube-rbac-proxy-rules
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:34Z"
    - containerID: cri-o://f56b1a3f2be3fa5f1619c84fc4fd6f2e761621164b9451155438257a292baa6d
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
      lastState: {}
      name: oauth-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:33Z"
    - containerID: cri-o://b8e23910be357b9098e9870d53c3713a33e7dc7e57b282be451ef21488353f4b
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
      lastState: {}
      name: prom-label-proxy
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:34Z"
    - containerID: cri-o://5455dbf6532b3af64140857906aacfa67bec8f76d5290eb73f737b4180a38a1a
      image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
      lastState: {}
      name: thanos-query
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-10-11T16:31:33Z"
    hostIP: 10.196.2.169
    phase: Running
    podIP: 10.128.23.114
    podIPs:
    - ip: 10.128.23.114
    qosClass: Burstable
    startTime: "2022-10-11T16:30:12Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Oct 13 10:20:16.688: INFO: Running 'oc --kubeconfig=.kube/config describe pod/prometheus-k8s-0 -n openshift-monitoring'
Oct 13 10:20:16.867: INFO: Describing pod "prometheus-k8s-0"
Name:                 prometheus-k8s-0
Namespace:            openshift-monitoring
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ostest-n5rnf-worker-0-j4pkp/10.196.0.199
Start Time:           Tue, 11 Oct 2022 16:46:11 +0000
Labels:               app=prometheus
                      app.kubernetes.io/component=prometheus
                      app.kubernetes.io/instance=k8s
                      app.kubernetes.io/managed-by=prometheus-operator
                      app.kubernetes.io/name=prometheus
                      app.kubernetes.io/part-of=openshift-monitoring
                      app.kubernetes.io/version=2.29.2
                      controller-revision-hash=prometheus-k8s-77f9b66476
                      operator.prometheus.io/name=k8s
                      operator.prometheus.io/shard=0
                      prometheus=k8s
                      statefulset.kubernetes.io/pod-name=prometheus-k8s-0
Annotations:          k8s.v1.cni.cncf.io/network-status:
                        [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.18" ], "mac": "fa:16:3e:ff:39:16", "default": true, "dns": {} }]
                      k8s.v1.cni.cncf.io/networks-status:
                        [{ "name": "kuryr", "interface": "eth0", "ips": [ "10.128.23.18" ], "mac": "fa:16:3e:ff:39:16", "default": true, "dns": {} }]
                      kubectl.kubernetes.io/default-container: prometheus
                      openshift.io/scc: nonroot
Status:               Running
IP:                   10.128.23.18
IPs:
  IP:           10.128.23.18
Controlled By:  StatefulSet/prometheus-k8s
Init Containers:
  init-config-reloader:
    Container ID:  cri-o://9815cb281e70c2da417d073b1078853225e5b302c85f2121225a9351d61a913a
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/prometheus-config-reloader
    Args:
      --watch-interval=0
      --listen-address=:8080
      --config-file=/etc/prometheus/config/prometheus.yaml.gz
      --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 11 Oct 2022 16:46:25 +0000
      Finished:     Tue, 11 Oct 2022 16:46:25 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:  prometheus-k8s-0 (v1:metadata.name)
      SHARD:     0
    Mounts:
      /etc/prometheus/config from config (rw)
      /etc/prometheus/config_out from config-out (rw)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
Containers:
  prometheus:
    Container ID:  cri-o://3a414883c35b3e87c2c09f3b2b8867fcd0df66eee9f93187703e5085f8c10893
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15900044237a6b875c27d642311afb5d5414af936cb74248219db44394ea44cf
    Port:          <none>
    Host Port:     <none>
    Args:
      --web.console.templates=/etc/prometheus/consoles
      --web.console.libraries=/etc/prometheus/console_libraries
      --config.file=/etc/prometheus/config_out/prometheus.env.yaml
      --storage.tsdb.path=/prometheus
      --storage.tsdb.retention.time=15d
      --web.enable-lifecycle
      --web.external-url=https://prometheus-k8s-openshift-monitoring.apps.ostest.shiftstack.com/
      --web.route-prefix=/
      --web.listen-address=127.0.0.1:9090
      --web.config.file=/etc/prometheus/web_config/web-config.yaml
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:34 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      70m
      memory:   1Gi
    Readiness:  exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=120
    Environment:  <none>
    Mounts:
      /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro)
      /etc/prometheus/certs from tls-assets (ro)
      /etc/prometheus/config_out from config-out (ro)
      /etc/prometheus/configmaps/kubelet-serving-ca-bundle from configmap-kubelet-serving-ca-bundle (ro)
      /etc/prometheus/configmaps/serving-certs-ca-bundle from configmap-serving-certs-ca-bundle (ro)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /etc/prometheus/secrets/kube-etcd-client-certs from secret-kube-etcd-client-certs (ro)
      /etc/prometheus/secrets/kube-rbac-proxy from secret-kube-rbac-proxy (ro)
      /etc/prometheus/secrets/metrics-client-certs from secret-metrics-client-certs (ro)
      /etc/prometheus/secrets/prometheus-k8s-proxy from secret-prometheus-k8s-proxy (ro)
      /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls from secret-prometheus-k8s-thanos-sidecar-tls (ro)
      /etc/prometheus/secrets/prometheus-k8s-tls from secret-prometheus-k8s-tls (ro)
      /etc/prometheus/web_config/web-config.yaml from web-config (ro,path="web-config.yaml")
      /prometheus from prometheus-k8s-db (rw,path="prometheus-db")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  config-reloader:
    Container ID:  cri-o://5d3320c71184e1addf19100e9b0e22b9aa5c6f32732e386a5da0abf8ace05f37
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c35e9c1a3908ebcaca14e3c525dc24c87337487dcb5ad393e7354ba867c7cdc
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/prometheus-config-reloader
    Args:
      --listen-address=localhost:8080
      --reload-url=http://localhost:9090/-/reload
      --config-file=/etc/prometheus/config/prometheus.yaml.gz
      --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
      --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:34 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  10Mi
    Environment:
      POD_NAME:  prometheus-k8s-0 (v1:metadata.name)
      SHARD:     0
    Mounts:
      /etc/prometheus/config from config (rw)
      /etc/prometheus/config_out from config-out (rw)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  thanos-sidecar:
    Container ID:  cri-o://f5cb2ce835f8fbed36917a4b3c532c1fcc1637ab0821627a665e3d1f9c366ef1
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a52d2872092390af7422d6b0dc0cf186f21969e6ed3c05f1cdd4286e59b25247
    Ports:         10902/TCP, 10901/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      sidecar
      --prometheus.url=http://localhost:9090/
      --tsdb.path=/prometheus
      --grpc-address=[$(POD_IP)]:10901
      --http-address=127.0.0.1:10902
      --grpc-server-tls-cert=/etc/tls/grpc/server.crt
      --grpc-server-tls-key=/etc/tls/grpc/server.key
      --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:35 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  25Mi
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /etc/tls/grpc from secret-grpc-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  prometheus-proxy:
    Container ID:  cri-o://a6923b8b95f035a65451e210e99b45c952f45b15c804d56f24f7eb1b32e60fba
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:236c0040758a5571a598a27bc7110fcc91a0b600f1b5d2b2211df618e8bcbf37
    Port:          9091/TCP
    Host Port:     0/TCP
    Args:
      -provider=openshift
      -https-address=:9091
      -http-address=
      -email-domain=*
      -upstream=http://localhost:9090
      -openshift-service-account=prometheus-k8s
      -openshift-sar={"resource": "namespaces", "verb": "get"}
      -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}
      -tls-cert=/etc/tls/private/tls.crt
      -tls-key=/etc/tls/private/tls.key
      -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      -cookie-secret-file=/etc/proxy/secrets/session_secret
      -openshift-ca=/etc/pki/tls/cert.pem
      -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -htpasswd-file=/etc/proxy/htpasswd/auth
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:35 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  20Mi
    Environment:
      HTTP_PROXY:
      HTTPS_PROXY:
      NO_PROXY:
    Mounts:
      /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro)
      /etc/proxy/htpasswd from secret-prometheus-k8s-htpasswd (rw)
      /etc/proxy/secrets from secret-prometheus-k8s-proxy (rw)
      /etc/tls/private from secret-prometheus-k8s-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  kube-rbac-proxy:
    Container ID:  cri-o://6c7642e88266e3d3f1c335f7891b27e145643cb20320fde8d209fcdb93853190
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Port:          9092/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:9092
      --upstream=http://127.0.0.1:9095
      --config-file=/etc/kube-rbac-proxy/config.yaml
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:35 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  15Mi
    Environment:  <none>
    Mounts:
      /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw)
      /etc/tls/private from secret-prometheus-k8s-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  prom-label-proxy:
    Container ID:  cri-o://6b35ff495a60795a54256be712e5818deaa0be599b3b18b08fd8f1e71bb1ec5d
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8ce784b0918999a413c417d59c84b8a2bf3955413d285c5b2e93dea1c9da60
    Port:          <none>
    Host Port:     <none>
    Args:
      --insecure-listen-address=127.0.0.1:9095
      --upstream=http://127.0.0.1:9090
      --label=namespace
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:36 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  15Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
  kube-rbac-proxy-thanos:
    Container ID:  cri-o://cafcf6053fe0a7b3c67ac6efb2b404448140fc54db10fca7d9c1766806ba8b75
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d63be14c9478a29c9d6631d11ff5f0fba4bb052bce81bf186c6e1c0578442c
    Port:          10902/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=[$(POD_IP)]:10902
      --upstream=http://127.0.0.1:10902
      --tls-cert-file=/etc/tls/private/tls.crt
      --tls-private-key-file=/etc/tls/private/tls.key
      --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
      --allow-paths=/metrics
      --logtostderr=true
      --client-ca-file=/etc/tls/client/client-ca.crt
    State:          Running
      Started:      Tue, 11 Oct 2022 16:46:36 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     1m
      memory:  10Mi
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /etc/tls/client from metrics-client-ca (ro)
      /etc/tls/private from secret-prometheus-k8s-thanos-sidecar-tls (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqzck (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  prometheus-k8s-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-k8s-db-prometheus-k8s-0
    ReadOnly:   false
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s
    Optional:    false
  tls-assets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-tls-assets
    Optional:    false
  config-out:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  prometheus-k8s-rulefiles-0:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-k8s-rulefiles-0
    Optional:  false
  web-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-web-config
    Optional:    false
  secret-kube-etcd-client-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-etcd-client-certs
    Optional:    false
  secret-prometheus-k8s-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-tls
    Optional:    false
  secret-prometheus-k8s-proxy:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-proxy
    Optional:    false
  secret-prometheus-k8s-thanos-sidecar-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-thanos-sidecar-tls
    Optional:    false
  secret-kube-rbac-proxy:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-rbac-proxy
    Optional:    false
  secret-metrics-client-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-client-certs
    Optional:    false
  configmap-serving-certs-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      serving-certs-ca-bundle
    Optional:  false
  configmap-kubelet-serving-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kubelet-serving-ca-bundle
    Optional:  false
  secret-prometheus-k8s-htpasswd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-htpasswd
    Optional:    false
  metrics-client-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      metrics-client-ca
    Optional:  false
  secret-grpc-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-grpc-tls-bg9h55jpjel3o
    Optional:    false
  prometheus-trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-trusted-ca-bundle-2rsonso43rc5p
    Optional:  true
  kube-api-access-gqzck:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
Oct 13 10:20:16.867: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c init-config-reloader -n openshift-monitoring'
Oct 13 10:20:17.069: INFO: Log for pod "prometheus-k8s-0"/"init-config-reloader" ---->
level=info ts=2022-10-11T16:46:25.301002319Z caller=main.go:148 msg="Starting prometheus-config-reloader" version="(version=0.49.0, branch=rhaos-4.9-rhel-8, revision=fc23b05)"
level=info ts=2022-10-11T16:46:25.301078043Z caller=main.go:149 build_context="(go=go1.16.12, user=root, date=20221006-18:49:18)"
<----end of log for "prometheus-k8s-0"/"init-config-reloader"
Oct 13 10:20:17.069: INFO: Running 'oc --kubeconfig=.kube/config logs pod/prometheus-k8s-0 -c prometheus -n openshift-monitoring'
Oct 13 10:20:18.784: INFO: Log for pod "prometheus-k8s-0"/"prometheus" ---->
level=error ts=2022-10-13T08:53:36.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:36.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:37.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:38.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:40.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:40.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:40.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:40.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:41.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:43.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:44.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:44.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:45.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:46.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:46.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:47.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:47.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:48.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:53:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:53:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:49.900Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:50.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:50.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:50.493Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:51.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:52.181Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88SKDNVAA0H9PCXSK9HB24.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T08:53:52.566Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.579Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:52.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:52.791Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:52.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:54.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:54.587Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:56.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.049Z caller=manager.go:625 component="rule 
manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:57.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T08:53:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:53:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:57.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:58.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:58.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T08:53:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:59.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:53:59.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:00.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:00.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:00.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:01.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:01.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:03.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:03.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:03.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:03.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:03.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:04.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:04.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:04.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:04.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:05.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:05.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:06.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:06.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:07.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T08:54:08.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:08.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:08.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.425Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:10.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:10.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:10.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T08:54:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:11.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:12.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:12.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:12.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 
level=error ts=2022-10-13T08:54:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:13.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:14.296Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:16.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:16.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:17.265Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:17.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.520Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:18.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.693Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.875Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.883Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:19.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:20.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:20.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:21.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:22.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.579Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:22.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T08:54:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:24.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:24.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:24.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:26.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:26.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T08:54:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:27.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:28.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:28.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:28.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:28.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:28.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:29.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:29.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:29.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:29.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:29.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:30.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:30.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:30.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:30.587Z caller=manager.go:625 component="rule manager" 
group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:30.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:31.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.549Z caller=manager.go:625 component="rule 
manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:34.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:34.302Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:35.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:35.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:35.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:35.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:36.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:36.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:36.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:37.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:38.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:38.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:38.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T08:54:38.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:40.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:40.395Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:41.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:42.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 
target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:42.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:42.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:43.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:44.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:44.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:44.299Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:44.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:44.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T08:54:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:46.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:47.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T08:54:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:47.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.486Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:48.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:49.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:49.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 
target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:49.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:54:49.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:54:49.785Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:54:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:54:52.182Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88VE0P6A15D0F7AARVRKCZ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T08:54:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[the same "write to WAL: ... write /prometheus/wal/00000039: no space left on device" error repeats between ts=2022-10-13T08:54:49Z and ts=2022-10-13T08:55:18Z for every scrape pool (kubelet, node-exporter, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, openshift-apiserver, dns-default, router-default, alertmanager, prometheus-k8s, thanos-querier, machine-api-controllers, machine-config-daemon, multus, kuryr, openstack-cinder-csi-driver-controller, image-registry, oauth-openshift, and the cluster operator serviceMonitors) and every rule group (kube-apiserver.rules, openshift-kubernetes.rules, k8s.rules, node-exporter.rules, kube-scheduler.rules, kubelet.rules, node.rules, openshift-monitoring.rules, openshift-etcd-telemetry.rules, kube-prometheus-node-recording.rules, kube-prometheus-general.rules, multus-admission-controller-monitor-service.rules, kubernetes-storage, kubernetes-system-apiserver, kubernetes-system-kubelet, apiserver-requests-in-flight, kubernetes-recurring.rules, cluster-version, telemeter.rules, openshift-sre.rules, general.rules, prometheus); duplicate entries omitted]
level=warn ts=2022-10-13T08:55:18.265Z caller=manager.go:625 
component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:18.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.506Z caller=manager.go:625 component="rule manager" 
group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.674Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.830Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:19.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:20.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:20.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:21.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:21.682Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:22.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.572Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.601Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:22.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:23.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T08:55:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:24.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:24.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:26.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:26.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:27.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:27.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:28.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:28.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:28.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:29.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:29.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:30.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:30.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:31.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 
target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:31.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:31.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:31.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:31.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:31.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:31.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:32.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:33.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:33.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:33.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:33.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:34.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:34.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics 
msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:34.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:34.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:35.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:35.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:36.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:36.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:36.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:38.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:38.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:38.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T08:55:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:39.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:39.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:39.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:39.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:40.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:40.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:40.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:40.587Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:41.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:42.084Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:42.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 
target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:42.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:42.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:43.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:43.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:43.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:43.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:43.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:43.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.161Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.256Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:44.350Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:44.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:45.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:45.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:45.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:46.086Z 
caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:46.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:46.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:47.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:47.271Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T08:55:47.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.674Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 
target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.770Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:49.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:49.945Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:50.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:50.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:50.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:50.485Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:51.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:51.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:52.183Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88X8KP5XDYAWGB07SMBJ94.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T08:55:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:52.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T08:55:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:52.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:52.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:53.694Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:54.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:54.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:56.005Z caller=manager.go:625 component="rule manager" 
group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:56.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:56.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.617Z 
caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:55:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:57.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:58.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:58.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:58.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:58.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:58.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:59.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:55:59.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T08:55:59.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:00.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:00.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:01.490Z 
caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T08:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 
target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:03.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:03.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:04.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:05.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:05.835Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:06.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:06.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:06.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:06.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:07.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:08.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:08.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:08.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:08.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:08.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:10.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:10.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:10.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T08:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:11.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:12.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:12.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:12.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.136Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:13.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.145Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.221Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:14.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:14.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:14.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:16.539Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:16.784Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:16.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:17.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:17.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:17.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:17.399Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:17.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:18.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device"

[log truncated: between 08:56:18 and 08:56:44 on 2022-10-13 the same failure repeats for every scrape pool (kubelet, node-exporter, etcd, kube-apiserver, openshift-apiserver, dns-default, multus, kuryr, the router, and the operator endpoints) and every rule group (cluster-version, openshift-ingress.rules, kube-apiserver.rules, openshift-kubernetes.rules, k8s.rules, node.rules, node-exporter.rules, kube-scheduler.rules, kubelet.rules, telemeter.rules, and others), always with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (occasionally "log series" instead of "log samples"). Representative entries:]

level=warn ts=2022-10-13T08:56:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:56:18.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:43.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:44.111Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.137Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:44.288Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:44.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:44.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:45.394Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:46.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:47.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T08:56:47.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:47.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 
target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:48.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T08:56:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:49.953Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:49.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:50.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:50.187Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:50.652Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:51.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:52.183Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF88Z36QGQ20DZWX8TNM2WBW.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T08:56:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.575Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:52.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:52.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:54.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 
target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:56.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:56.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T08:56:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:56:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:57.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:58.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:58.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:59.851Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:59.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:56:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:00.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:00.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:01.488Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:01.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.546Z caller=manager.go:625 
component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to 
WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:02.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:03.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:03.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:03.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:03.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:03.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T08:57:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:04.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:05.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:06.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:06.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:07.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:07.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:08.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:08.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:08.531Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:08.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.754Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:09.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:10.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:10.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T08:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:11.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:12.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:12.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:12.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:13.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:13.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:13.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.075Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.185Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:14.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:14.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:14.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:14.813Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:14.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:16.466Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:16.466Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:16.543Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:16.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:16.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:17.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:17.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:17.459Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:17.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 
target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:18.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.895Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:19.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:20.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:20.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:20.537Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:21.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:21.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.012Z caller=manager.go:625 
component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T08:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:22.643Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:22.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:24.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:24.539Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:24.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:26.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:26.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:26.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.050Z 
caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:27.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T08:57:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:28.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:28.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:28.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T08:57:28.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:29.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:29.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:29.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:30.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:30.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:31.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics 
msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:31.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:31.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:33.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:33.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:33.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
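The signal in a flood like this is easier to see once the records are grouped by signature rather than read line by line. Below is a minimal sketch of that condensation, assuming the raw pod log has been saved to a local file named prometheus.log (a hypothetical name; any dump of the logfmt records above works):

```python
# Group repeated Prometheus logfmt records by (level, component, err)
# and print one line per distinct signature, most frequent first.
import re
from collections import Counter

# Each record starts with "level="; the err value is double-quoted and
# contains no nested quotes, matching the records above.
RECORD = re.compile(
    r'level=(?P<level>\w+).*?component="(?P<component>[^"]+)".*?err="(?P<err>[^"]+)"',
    re.DOTALL,
)

counts = Counter()
with open("prometheus.log") as f:  # hypothetical local copy of the pod log
    for chunk in re.split(r"(?=level=)", f.read()):
        m = RECORD.search(chunk)
        if m:
            counts[(m["level"], m["component"], m["err"])] += 1

for (level, component, err), n in counts.most_common():
    print(f"{n:6d}  {level:<5}  {component:<15}  {err}")
```

Applied to this section, the output collapses to two signatures (the rule-manager warning and the scrape-manager error), both ending in the same "no space left on device" message, which points at the full WAL volume rather than at any individual rule group or scrape target.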
[The identical flood continues without interruption from 08:57:33Z through 08:57:49Z, adding the rule groups kubelet.rules, kubernetes-system-kubelet, kube-scheduler.rules, openshift-sre.rules, apiserver-requests-in-flight, kube-apiserver.rules, kubernetes-recurring.rules, cluster-version, general.rules, and kube-prometheus-general.rules, plus further scrape pools (the openstack-cinder-csi-driver controller monitors, machine-config-daemon, machine-api, image-registry, oauth-openshift, olm-operator, catalog-operator, grafana, alertmanager, thanos-querier, thanos-sidecar, prometheus-adapter, telemeter-client). Every record carries err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (or "log series" in place of "log samples").]
level=error ts=2022-10-13T08:57:49.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:49.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.665Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.829Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 
target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:50.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:51.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:57:52.184Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF890XSR3NVQ36V7GQJ9G2EH.tmp-for-creation: no space left on device" level=error ts=2022-10-13T08:57:52.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:57:52.567Z caller=manager.go:625 component="rule manager" 
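Everything above is one underlying condition, not many: the filesystem backing /prometheus is full, so every WAL append (from scrape commits and rule evaluations alike) and the head-block compaction fail with ENOSPC. A minimal sketch of how one might confirm this from outside the pod, assuming the stock openshift-monitoring names (pod prometheus-k8s-0, container prometheus), which this report does not itself state:

    # Minimal sketch: check free space on the Prometheus data mount via `oc exec`.
    # Pod and container names below are assumed stock defaults, not taken from this log.
    import subprocess

    cmd = [
        "oc", "exec", "-n", "openshift-monitoring", "prometheus-k8s-0",
        "-c", "prometheus", "--",
        "df", "-h", "/prometheus",  # the mount every WAL error above points at
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)  # expect Use% at or near 100% while these errors recur

If the pod cannot be exec'd, the kubelet_volume_stats_available_bytes metric for the backing PVC (when persistent storage is configured) tells the same story, though with ingestion broken the most recent samples may themselves be missing.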
[... the flood continues uninterrupted after the failed compaction: openshift-kubernetes.rules alone logs roughly forty consecutive "Rule sample appending failed" warnings at 08:57:52, followed by the same pattern for kube-prometheus-node-recording.rules, multus-admission-controller-monitor-service.rules, node.rules, prometheus, openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, and kubernetes-system-apiserver ...]
level=error ts=2022-10-13T08:57:57.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[... scrape commits keep failing through 08:58:10 for the remaining targets as well: openshift-apiserver and its check-endpoints, dns-default, dns-operator, machine-config-daemon, the machine-api controllers and operator, cluster-version-operator, cluster-autoscaler-operator, cloud-credential-operator, kube-apiserver-operator, kube-controller-manager and its operator, kube-scheduler and its operator, config-operator, insights-operator, authentication-operator, node-tuning-operator, image-registry, router-default, alertmanager, prometheus-k8s, openshift-state-metrics, and the kubelet metrics, cadvisor, and probes endpoints on every node; kubelet, kube-apiserver, and node-exporter targets also hit the "log series" WAL variant ...]
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:10.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:10.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:10.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:11.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:11.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:12.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:12.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:12.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:12.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 
target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:13.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:13.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.013Z caller=manager.go:625 component="rule 
manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.182Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:14.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:14.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T08:58:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:14.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:15.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:15.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:16.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:16.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:17.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:17.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:17.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:17.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 
target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:18.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.714Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.882Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.891Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:19.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:20.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:21.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:21.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.013Z 
caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:22.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T08:58:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:22.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:23.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:24.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:24.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:24.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T08:58:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:24.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:26.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:26.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:27.115Z caller=manager.go:625 component="rule manager" 
group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:27.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 5 more times for group=openshift-monitoring.rules, last ts=2022-10-13T08:58:27.621Z]
level=warn ts=2022-10-13T08:58:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 11 more times for group=k8s.rules, last ts=2022-10-13T08:58:27.705Z]
level=error ts=2022-10-13T08:58:27.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:28.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:30.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 4 more times for group=openshift-etcd-telemetry.rules, last ts=2022-10-13T08:58:31.489Z]
level=error ts=2022-10-13T08:58:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 10 more times for group=node-exporter.rules, last ts=2022-10-13T08:58:32.548Z]
level=error ts=2022-10-13T08:58:32.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 2 more times for group=kubelet.rules, last ts=2022-10-13T08:58:34.300Z]
level=error ts=2022-10-13T08:58:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:36.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:38.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.413Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:40.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 5 more times for group=kube-scheduler.rules, last ts=2022-10-13T08:58:40.985Z]
level=warn ts=2022-10-13T08:58:40.985Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:43.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 12 more times for group=kube-apiserver.rules, last ts=2022-10-13T08:58:44.151Z]
level=error ts=2022-10-13T08:58:44.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.236Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:44.330Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:47.300Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 7 more times for group=openshift-ingress.rules, last ts=2022-10-13T08:58:49.506Z]
level=error ts=2022-10-13T08:58:49.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.726Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.891Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.897Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:49.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:50.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:50.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:58:52.185Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF892RCSVWPXGF6QFT5JZB4Z.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T08:58:52.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:58:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated ~32 more times for group=openshift-kubernetes.rules, last ts=2022-10-13T08:58:52.580Z; the final entry is cut off mid-field where the capture ends]
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:52.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:54.467Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:54.524Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:56.005Z caller=manager.go:625 component="rule manager" 
group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:56.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:56.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:57.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.618Z 
caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:58:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:57.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:58.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:58.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:58.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:59.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:58:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T08:58:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:00.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:01.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write 
to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:01.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:01.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:03.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:03.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:03.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:04.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:04.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:04.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:04.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:05.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:05.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 
target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:06.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:06.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:06.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:07.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:08.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:08.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:08.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:08.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:08.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:09.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T08:59:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:10.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:10.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:11.268Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:11.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:12.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T08:59:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... entries of the same two forms repeat continuously from ts=2022-10-13T08:59:11Z through ts=2022-10-13T08:59:37Z: "Scrape commit failed" (level=error, caller=scrape.go:1190, component="scrape manager") for every serviceMonitor scrape pool in the cluster, and "Rule sample appending failed" (level=warn, caller=manager.go:625, component="rule manager") for every rule group, all with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (or "log series" against the same file) ...]
level=error ts=2022-10-13T08:59:37.632Z caller=scrape.go:1190 component="scrape manager"
scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:38.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:38.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:38.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.161Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:40.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:40.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:42.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:42.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:42.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:43.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:43.941Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:43.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.161Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:44.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:44.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:44.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:44.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:44.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:45.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:46.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:47.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:47.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:47.206Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:47.560Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:47.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:48.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.504Z caller=manager.go:625 
component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.813Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:49.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:50.375Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:51.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:51.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T08:59:52.186Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF894JZTWVBECSW7HWXTEZJ7.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T08:59:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.577Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T08:59:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device"

The filesystem backing the Prometheus data directory is full, so every write to WAL segment /prometheus/wal/00000039 fails with "no space left on device". From 08:59:52Z onward the pod log repeats the same two messages for every scrape pool and rule group: the scrape manager logs "Scrape commit failed" (for the kube-apiserver, openshift-apiserver, kubelet, node-exporter, etcd, dns-default, multus, kuryr, machine-api, machine-config-daemon, image-registry, CSI-driver, and per-operator serviceMonitors, among others), and the rule manager logs "Rule sample appending failed" (for openshift-kubernetes.rules, kube-prometheus-node-recording.rules, k8s.rules, node-exporter.rules, kubelet.rules, node.rules, openshift-monitoring.rules, openshift-etcd-telemetry.rules, multus-admission-controller-monitor-service.rules, telemeter.rules, kubernetes-system-apiserver, and prometheus, among others). Representative entries:

level=warn ts=2022-10-13T08:59:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:52.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T08:59:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"

Both the "log samples" and "log series" variants of the WAL write appear; from entry to entry only the timestamp and the scrape target or rule group change.
The remainder of the excerpt follows the same pattern. Through 09:00:22Z the scrape manager reports "Scrape commit failed" for the rest of the monitoring stack (alertmanager, thanos-querier, thanos-sidecar, prometheus-k8s, prometheus-adapter, prometheus-operator, grafana, telemeter-client, openshift-state-metrics, cluster-monitoring-operator) as well as for oauth-openshift, router-default, console-operator, olm-operator, catalog-operator, marketplace-operator, cluster-machine-approver, cluster-storage-operator, etcd-operator, ingress-operator, machine-api-operator, network-check-source, and the remaining kubelet, dns-default, multus, kuryr, CSI-driver, kube-scheduler, and kube-controller-manager targets; the rule manager reports "Rule sample appending failed" for kube-apiserver.rules, kube-scheduler.rules, kubelet.rules, openshift-ingress.rules, kube-prometheus-general.rules, general.rules, cluster-version, apiserver-requests-in-flight, kubernetes-system-kubelet, kubernetes-recurring.rules, cluster-network-operator-kuryr.rules, openshift-sre.rules, and kubernetes-storage. Every entry carries the same err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (or its "log series" variant), e.g.:

level=error ts=2022-10-13T09:00:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:00:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:22.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.580Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:00:22.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:24.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:24.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:00:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:26.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:26.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.115Z caller=manager.go:625 component="rule manager" 
group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.683Z 
caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:27.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:28.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:28.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:28.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:28.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:29.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:30.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:30.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:30.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:31.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:31.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 
target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:32.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 
target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:33.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:34.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:34.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:34.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:34.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:34.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:35.458Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:35.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:35.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:35.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:35.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:36.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:36.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:36.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:37.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 
target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:38.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:38.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:38.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:39.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:39.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:39.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:39.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:40.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:40.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:40.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 
component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:41.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:42.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:42.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:42.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:42.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:42.704Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:43.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.087Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.204Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:00:44.316Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:44.402Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:44.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:44.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:45.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:45.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:45.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:46.108Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:47.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:47.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:47.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:47.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.753Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:48.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:49.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:49.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:50.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:50.141Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:50.549Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:51.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:51.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:52.187Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF896DJV5EKEQ94KN4DEB2QP.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:00:52.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.574Z caller=manager.go:625 component="rule 
manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:00:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:53.245Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:54.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:54.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:54.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:56.005Z caller=manager.go:625 component="rule manager" 
group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:56.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:56.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.618Z caller=manager.go:625 component="rule 
manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:00:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:57.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 
target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:58.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:58.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:58.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:58.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:58.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:58.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:59.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:59.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:00:59.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
level=error ts=2022-10-13T09:01:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous entry repeated 4 more times through ts=2022-10-13T09:01:01.488Z]
level=error ts=2022-10-13T09:01:01.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous entry repeated 10 more times through ts=2022-10-13T09:01:02.548Z]
level=error ts=2022-10-13T09:01:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.414Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:06.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous entry repeated 5 more times through ts=2022-10-13T09:01:10.983Z]
level=error ts=2022-10-13T09:01:11.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:11.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:13.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous entry repeated 13 more times through ts=2022-10-13T09:01:14.271Z]
level=error ts=2022-10-13T09:01:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:14.379Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:16.998Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:17.523Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:18.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous entry repeated 7 more times through ts=2022-10-13T09:01:19.511Z]
level=error ts=2022-10-13T09:01:19.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:20.732Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:21.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:22.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous entry repeated roughly 40 more times through ts=2022-10-13T09:01:22.607Z]
manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:22.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:23.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:24.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:24.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:26.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:26.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:27.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:27.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:27.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:28.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:28.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:29.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:29.944Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:30.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:30.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:31.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:01:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:31.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:31.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:01:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:01:34.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:35.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:35.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:36.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:36.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:36.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:36.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:36.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:36.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:37.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:37.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:38.203Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:38.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:38.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:38.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:38.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.529Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:40.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:40.981Z caller=manager.go:625 component="rule manager" 
group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:41.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:42.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:42.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:42.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:42.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:42.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:43.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:43.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:01:43.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:43.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:43.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:01:43.992Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:01:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:44.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:48.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:01:52.189Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89885XM369MDWN23YT72AD.tmp-for-creation: no space left on device"
[... further entries through ts=2022-10-13T09:02:09.996Z omitted; the excerpt ends mid-entry. The same two failures repeat throughout: "Rule sample appending failed" (level=warn, rule manager) for the groups kube-apiserver.rules, cluster-version, general.rules, kubernetes-recurring.rules, kube-prometheus-general.rules, openshift-ingress.rules, cluster-network-operator-kuryr.rules, kubernetes-storage, openshift-kubernetes.rules, kubernetes-system-apiserver, kube-prometheus-node-recording.rules, multus-admission-controller-monitor-service.rules, node.rules, prometheus, openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, and kubelet.rules, and "Scrape commit failed" (level=error, scrape manager) for serviceMonitor targets across the openshift-* namespaces; every entry carries err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (with "log series" in place of "log samples" in some entries) ...]
scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:10.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:10.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:10.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:10.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 
target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:12.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:12.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:12.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:13.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:13.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.009Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.168Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:14.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:14.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:14.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:15.253Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:15.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:15.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:16.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:02:17.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:17.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:17.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:17.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 
target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.639Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.802Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.808Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:20.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:20.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:21.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:02:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.580Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:22.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:22.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:02:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:24.715Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:26.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:26.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:28.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:28.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:28.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:28.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:28.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:02:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:29.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:29.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:30.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:30.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:31.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:31.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:31.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:31.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:31.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:31.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:02:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 
target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:33.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:33.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:33.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:34.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:34.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:34.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:34.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:35.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:35.616Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:35.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:35.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:36.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:36.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:37.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:37.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:38.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:38.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:38.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:40.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:40.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:40.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:41.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:41.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:42.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:42.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:42.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.083Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:43.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.012Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.096Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.315Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.453Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:44.572Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:45.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:46.539Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:46.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:46.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:47.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:47.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:47.407Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.872Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:48.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.504Z 
caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.738Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.932Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:49.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:50.206Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:50.317Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:51.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:52.190Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89A2RY3PJQ6DF73MW28F0H.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:02:52.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.576Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:02:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:52.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:52.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:54.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:54.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:56.221Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:56.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.618Z caller=manager.go:625 
component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:02:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:58.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:58.204Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:58.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:58.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:02:59.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:00.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:00.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:00.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:01.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:01.895Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:01.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:02.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:03.347Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:03.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:03.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:04.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:04.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:04.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:04.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:04.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:05.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:05.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:06.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:06.323Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:06.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:07.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:07.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:08.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:08.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:08.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:08.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:08.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:08.804Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:08.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:09.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:09.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:09.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:09.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:09.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:10.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:10.071Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:10.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:11.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:11.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:12.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:12.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:12.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:12.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:12.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:13.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.187Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:14.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:14.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:14.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:15.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:15.342Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:16.087Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:16.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:17.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:17.186Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:17.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:17.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 
target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:18.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:03:19.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.732Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.854Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.898Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.905Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:20.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:20.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:21.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:21.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:22.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:03:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.581Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:23.240Z 
caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:24.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:24.914Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:26.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:26.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:26.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:03:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.718Z caller=manager.go:625 
component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:27.738Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:27.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:28.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:28.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:28.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:28.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:03:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:29.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:29.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:30.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:31.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:31.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:31.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:31.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 
target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:33.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:33.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:33.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:34.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:35.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:35.612Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:35.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:35.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:35.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:36.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:36.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:36.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:36.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:36.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:37.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:38.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:38.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:40.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:40.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:40.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:40.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:41.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:42.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:42.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:42.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.086Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:43.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:43.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:43.982Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.078Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.189Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:44.342Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:44.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:46.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:46.535Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:46.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:46.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:47.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:47.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.911Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:48.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.504Z 
caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.762Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.927Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.933Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:49.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:50.216Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:50.385Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:51.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:52.191Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89BXBZ30M5WZHTJ1NB51SD.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T09:03:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.569Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:03:52.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:03:52.577Z caller=manager.go:625 component="rule manager" 
Prometheus container log (excerpt, condensed), 2022-10-13 09:03:52Z–09:04:19Z: every WAL write is failing with "no space left on device". Representative records, one per line:

level=warn ts=2022-10-13T09:03:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:03:54.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"

These two record shapes repeat hundreds of times through the window, differing only in timestamp, in the failing source, and in whether the WAL write was for samples or for series; the WAL segment (/prometheus/wal/00000039) and the error ("no space left on device") are identical throughout. Rule-manager warnings ("Rule sample appending failed") cover the groups openshift-kubernetes.rules, kubernetes-system-apiserver, kube-prometheus-node-recording.rules, multus-admission-controller-monitor-service.rules, node.rules, prometheus, openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kubernetes-system-kubelet, openshift-sre.rules, kube-scheduler.rules, apiserver-requests-in-flight, kube-apiserver.rules, kubernetes-recurring.rules, cluster-version, general.rules, kube-prometheus-general.rules, and openshift-ingress.rules. Scrape-manager errors ("Scrape commit failed") cover essentially every serviceMonitor in the cluster: kubelet, node-exporter, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, openshift-apiserver, oauth-openshift, dns-default, router-default, image-registry, the multus and kuryr network monitors, the machine-api and machine-config daemons, the openstack-cinder-csi-driver controller monitors, the monitoring stack itself (prometheus-k8s, alertmanager, thanos-querier, thanos-sidecar, prometheus-adapter, prometheus-operator, telemeter-client, grafana, openshift-state-metrics), and the cluster operators (authentication, apiserver, cloud-credential, cluster-autoscaler, cluster-machine-approver, cluster-storage, cluster-version, config, console, controller-manager, dns, etcd, ingress, insights, kube-apiserver, kube-controller-manager, kube-scheduler, machine-api, marketplace, network-diagnostics, node-tuning, olm, and catalog operators).
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:19.758Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:19.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:19.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:20.374Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:21.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:22.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.572Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:22.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:04:23.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:24.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:24.733Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:26.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:26.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:26.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:27.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:28.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:28.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:28.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:04:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:29.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:30.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:30.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:04:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:31.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 
target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:34.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:04:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:35.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:04:35.612Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:35.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:36.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:04:40.718Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:04:52.192Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89DQZ061CFFYP7PAVQQ4N1.tmp-for-creation: no space left on device"
[09:04:35Z–09:05:02Z: hundreds of further records, differing only in timestamp and target, repeat the same two errors — "Scrape commit failed" (component="scrape manager") for every scrape pool (openshift-monitoring, openshift-multus, openshift-dns, openshift-kube-apiserver, openshift-kube-scheduler, openshift-kube-controller-manager, openshift-ingress, openshift-machine-api, openshift-authentication, openshift-apiserver, openshift-cluster-csi-drivers, openshift-kuryr, and the other operator namespaces) and "Rule sample appending failed" (component="rule manager") for every rule group (kube-scheduler.rules, kube-apiserver.rules, openshift-kubernetes.rules, openshift-ingress.rules, openshift-monitoring.rules, k8s.rules, node.rules, node-exporter.rules, openshift-etcd-telemetry.rules, cluster-version, telemeter.rules, and others) — all failing with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" or the "log series" variant of the same error.]
level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:02.747Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:02.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:03.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:03.240Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:03.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:03.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:03.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:03.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:04.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:04.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:04.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:04.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:05.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:06.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:06.325Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:07.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:08.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:08.358Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:08.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:08.800Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:08.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:10.064Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:10.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:11.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:11.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:12.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:12.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:12.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:13.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:13.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.195Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.276Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:14.368Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:14.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:14.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:15.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:15.318Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:16.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:16.471Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:16.472Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:16.549Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:16.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:17.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:17.191Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:17.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:17.572Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit 
failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:18.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:19.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:19.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:19.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:05:19.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:19.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:19.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:19.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:19.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
Prometheus container log excerpt, 2022-10-13 09:05:19Z through 09:05:47Z. Every entry carries the same error, err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (a subset of the kubelet, node-exporter, kube-apiserver, kube-controller-manager, kube-state-metrics and monitor-network entries reports "log series" in place of "log samples"). Two messages alternate throughout the window; representative entries:

level=error ts=2022-10-13T09:05:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

level=warn ts=2022-10-13T09:05:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

msg="Scrape commit failed" (level=error, caller=scrape.go:1190) recurs for the service monitors of openshift-apiserver, openshift-apiserver-operator, openshift-authentication, openshift-authentication-operator, openshift-cloud-credential-operator, openshift-cluster-csi-drivers, openshift-cluster-samples-operator, openshift-cluster-version, openshift-config-operator, openshift-console-operator, openshift-controller-manager, openshift-dns, openshift-dns-operator, openshift-etcd-operator, openshift-image-registry, openshift-ingress, openshift-ingress-operator, openshift-insights, openshift-kube-apiserver, openshift-kube-apiserver-operator, openshift-kube-controller-manager, openshift-kube-controller-manager-operator, openshift-kube-scheduler, openshift-kube-scheduler-operator, openshift-kuryr, openshift-machine-api, openshift-machine-config-operator, openshift-monitoring (alertmanager, cluster-monitoring-operator, etcd, kube-state-metrics, kubelet, node-exporter, openshift-state-metrics, prometheus-adapter, prometheus-k8s, prometheus-operator, telemeter-client, thanos-querier, thanos-sidecar), openshift-multus, openshift-network-diagnostics, openshift-operator-lifecycle-manager and openshift-service-ca-operator.

msg="Rule sample appending failed" (level=warn, caller=manager.go:625) recurs for the rule groups apiserver-requests-in-flight, cluster-network-operator-kuryr.rules, cluster-version, general.rules, k8s.rules, kube-apiserver.rules, kube-prometheus-general.rules, kube-prometheus-node-recording.rules, kube-scheduler.rules, kubelet.rules, kubernetes-recurring.rules, kubernetes-storage, kubernetes-system-apiserver, kubernetes-system-kubelet, multus-admission-controller-monitor-service.rules, node-exporter.rules, node.rules, openshift-etcd-telemetry.rules, openshift-kubernetes.rules, openshift-monitoring.rules, openshift-sre.rules, prometheus and telemeter.rules.
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.888Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:48.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.504Z 
caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.752Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.908Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.915Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:50.206Z caller=scrape.go:1190 component="scrape manager" 
level=warn ts=2022-10-13T09:05:50.311Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:51.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.193Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89FJJ102PQFY22KYFRT94W.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:05:52.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:52.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:52.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:05:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:05:54.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:54.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:56.269Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:56.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:56.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:56.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.619Z caller=manager.go:625 
component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:05:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:57.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:58.203Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:58.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:58.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:58.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:58.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:59.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:59.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:05:59.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:00.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:00.515Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:00.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:01.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:01.888Z caller=scrape.go:1190 component="scrape manager" 
level=error ts=2022-10-13T09:06:01.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:02.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:03.240Z
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:03.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:03.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:05.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:06.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:06.336Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:06.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:06.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:07.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:08.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:08.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:08.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:08.675Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:08.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:08.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:09.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:10.001Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:10.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:10.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:11.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 
target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:13.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.256Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:14.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:15.256Z
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:16.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:17.002Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:06:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:17.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:17.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 
target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:19.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.808Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:20.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:20.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:20.413Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:21.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:22.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:22.573Z
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:22.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:22.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:06:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:24.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:24.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:24.717Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:24.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:26.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.669Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:27.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:28.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:28.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:06:29.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:30.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:31.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:06:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 
target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:33.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:33.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:33.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:33.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:34.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:34.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:35.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:35.611Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:35.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:36.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:36.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:36.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:36.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:37.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:38.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:38.418Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:38.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:38.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log 
series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:40.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:40.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:40.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:42.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:42.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:42.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:42.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.122Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:43.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:43.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:43.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:43.975Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.167Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.261Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:44.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:44.345Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:44.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:46.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:47.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:06:47.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:06:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:06:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:48.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:06:52.194Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89HD51D3WCB7ZGX74J2VTV.tmp-for-creation: no space left on device"
[... several hundred near-identical entries between ts=2022-10-13T09:06:48Z and ts=2022-10-13T09:07:13Z omitted: level=error msg="Scrape commit failed" (caller=scrape.go:1190, one per serviceMonitor scrape pool and target) and level=warn msg="Rule sample appending failed" (caller=manager.go:625, one per rule group), all with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" or the equivalent "log series" variant ...]
level=error ts=2022-10-13T09:07:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples:
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:13.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:13.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:13.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.155Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:14.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:14.364Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:14.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:14.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:15.320Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:16.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:16.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:17.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:17.191Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:17.302Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:17.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:17.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 
target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:07:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.668Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.839Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.847Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:20.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:21.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.567Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:22.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:22.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:23.072Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:23.240Z 
caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:24.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:24.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:24.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:24.918Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:26.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:26.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:07:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:27.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:27.987Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:28.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:28.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:28.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:29.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:29.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
level=error ts=2022-10-13T09:07:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:07:29.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[... roughly 140 further level=error "Scrape commit failed" entries from caller=scrape.go:1190 between 09:07:29.948Z and 09:07:54.920Z, identical except for scrape_pool and target: every ServiceMonitor in the cluster fails (etcd, kubelet, kube-apiserver, kube-scheduler, kube-controller-manager, dns-default, router-default, oauth-openshift, machine-api-controllers, machine-config-daemon, monitor-network, monitor-multus-admission-controller, monitor-kuryr-cni, monitor-kuryr-controller, openstack-cinder-csi-driver-controller-monitor, node-exporter, kube-state-metrics, prometheus-k8s, prometheus-adapter, alertmanager, thanos-querier, thanos-sidecar, telemeter-client, grafana, image-registry, and the cluster operators' ServiceMonitors), each with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (a handful report "log series" instead of "log samples") ...]
level=warn ts=2022-10-13T09:07:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... roughly 120 further level=warn "Rule sample appending failed" entries from caller=manager.go:625 between 09:07:31.487Z and 09:07:56.005Z with the same err, covering the groups openshift-etcd-telemetry.rules (x5), node-exporter.rules (x11), kubelet.rules (x3), kubernetes-system-kubelet, kube-scheduler.rules (x6), openshift-sre.rules, apiserver-requests-in-flight, kube-apiserver.rules (~x20), kubernetes-recurring.rules, cluster-version (x4), general.rules, kube-prometheus-general.rules, openshift-ingress.rules (x8), cluster-network-operator-kuryr.rules, kubernetes-storage (x2), openshift-kubernetes.rules (~x40), kubernetes-system-apiserver, kube-prometheus-node-recording.rules (x4), and multus-admission-controller-monitor-service.rules (x2) ...]
level=error ts=2022-10-13T09:07:52.195Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89K7R3MB2YWJ7H69Z1BA3Q.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:07:56.236Z caller=scrape.go:1190
component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:56.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.620Z caller=manager.go:625 
component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:07:57.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:58.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:58.201Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:58.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:58.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:59.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:07:59.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:00.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:00.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:01.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:01.888Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:01.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:02.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:03.384Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:03.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:03.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:03.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:03.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:04.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:04.302Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:04.303Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:04.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:04.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:05.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:05.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:05.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:06.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:06.323Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:06.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:07.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:08.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:08.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:08.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:08.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:08.675Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:08.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:09.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:09.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:10.023Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:10.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:10.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:11.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 
target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:11.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:12.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:12.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:12.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:12.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.371Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:13.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:13.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:13.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:14.021Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:08:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[identical "Rule sample appending failed" (level=warn, caller=manager.go:625) and "Scrape commit failed" (level=error, caller=scrape.go:1190) entries repeat continuously from ts=2022-10-13T09:08:14.023Z through ts=2022-10-13T09:08:40.981Z, differing only in timestamp, rule group, and scrape target. Affected rule groups: kube-apiserver.rules, kubernetes-recurring.rules, cluster-version, general.rules, kube-prometheus-general.rules, openshift-ingress.rules, cluster-network-operator-kuryr.rules, kubernetes-storage, openshift-kubernetes.rules, kubernetes-system-apiserver, kube-prometheus-node-recording.rules, multus-admission-controller-monitor-service.rules, node.rules, prometheus, openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kubernetes-system-kubelet, openshift-sre.rules, kube-scheduler.rules. Affected scrape pools span every monitored serviceMonitor in the cluster, including openshift-monitoring (kubelet, node-exporter, etcd, prometheus-k8s, alertmanager, thanos-querier, thanos-sidecar, kube-state-metrics, prometheus-adapter, prometheus-operator, grafana, telemeter-client, cluster-monitoring-operator), openshift-kube-apiserver/kube-apiserver, openshift-apiserver, openshift-kube-scheduler, openshift-kube-controller-manager, openshift-dns/dns-default, openshift-ingress/router-default, openshift-multus/monitor-network, openshift-kuryr/monitor-kuryr-cni, openshift-machine-config-operator/machine-config-daemon, openshift-machine-api, openshift-cluster-csi-drivers, openshift-image-registry, openshift-authentication, and the cluster operator endpoints. Every failure carries the same error, err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (or "log series" for the same WAL segment); that is, the volume backing the Prometheus write-ahead log is full, so no rule samples or scraped series can be committed.]
level=warn ts=2022-10-13T09:08:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:41.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:42.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:42.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:42.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.119Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:43.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:43.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:43.974Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:44.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:44.348Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:44.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:47.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:47.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:47.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.874Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:48.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.504Z 
caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.627Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.805Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.812Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:50.205Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:50.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:51.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:52.196Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89N2B4507E3CGY688Q5N8Y.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:08:52.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.573Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:08:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.603Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:52.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:54.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:54.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:54.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:54.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:54.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:56.208Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:56.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.618Z caller=manager.go:625 
component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:08:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:57.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:58.203Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:58.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:58.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:58.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:08:59.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:00.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:00.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:01.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:01.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:01.896Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:01.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:03.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:03.241Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:03.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:03.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:03.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:04.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:05.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:05.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:05.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:05.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:06.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:06.324Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:06.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:07.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:08.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:08.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:08.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:08.797Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:10.053Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:10.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:11.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:12.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:12.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:12.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:12.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:13.940Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.001Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.148Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:14.323Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:14.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:14.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:15.319Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:16.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:16.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:16.996Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:17.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:17.191Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:17.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:17.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:17.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:18.269Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 
target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:18.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:09:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.755Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.850Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.917Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.923Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:19.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:20.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:20.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:21.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:22.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:09:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.583Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:23.240Z 
caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:24.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:26.005Z caller=manager.go:625 component="rule manager" 
group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:26.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.618Z 
caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:27.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:27.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:28.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:28.744Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:28.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:29.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:09:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:30.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:30.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:30.589Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:31.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:31.488Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:31.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.546Z caller=manager.go:625 component="rule 
manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 
target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:34.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:34.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:35.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:35.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:35.850Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:35.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:35.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:36.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:36.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:36.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:36.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:37.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:38.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:38.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:38.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:38.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:38.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:39.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:39.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:39.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:39.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:09:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:39.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:40.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:40.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:40.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:40.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:40.981Z 
caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:41.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.363Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:42.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:43.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:43.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 11 more times for group=kube-apiserver.rules between ts=2022-10-13T09:09:43.966Z and ts=2022-10-13T09:09:44.086Z]
level=error ts=2022-10-13T09:09:44.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.168Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.248Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:44.332Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
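Every entry in this flood fails for the same underlying reason: appends to the active write-ahead-log segment /prometheus/wal/00000039 return ENOSPC, so every scrape commit ("Scrape commit failed") and every recording-rule evaluation ("Rule sample appending failed") is dropped. "no space left on device" can mean the volume backing /prometheus is out of data blocks or out of inodes; both surface identically. A minimal, Linux-only Go sketch of the check an operator could run inside the pod (the program and its default path are illustrative, not part of the test suite):

```go
// walspace: report free blocks and free inodes for a Prometheus data
// directory. ENOSPC on a WAL write can come from either counter hitting zero.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	dir := "/prometheus" // assumed data mount, matching the paths in the log
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}
	var st syscall.Statfs_t
	if err := syscall.Statfs(dir, &st); err != nil {
		fmt.Fprintf(os.Stderr, "statfs %s: %v\n", dir, err)
		os.Exit(1)
	}
	// Bavail counts blocks available to unprivileged users; Bsize is the block size.
	fmt.Printf("%s: %d bytes free, %d of %d inodes free\n",
		dir, st.Bavail*uint64(st.Bsize), st.Ffree, st.Files)
}
```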
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:44.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:45.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:46.789Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:47.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:47.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:47.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.913Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:09:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.652Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.813Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.821Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:49.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 
target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:50.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:50.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:51.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:51.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:09:52.197Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89PWY5GVP21KBFFYZJKARR.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T09:09:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:09:52.568Z 
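The db.go:826 entry just above explains why the condition does not clear on its own: TSDB compaction persists the in-memory head block to a new on-disk block, created first under a temporary *.tmp-for-creation directory, and WAL segments are only truncated after the head has been compacted away. With the volume completely full, even the mkdir for that temporary directory fails, so the cleanup path that would normally reclaim space is wedged along with ingestion. A small sketch of distinguishing ENOSPC from other mkdir failures (the probe path is hypothetical, not something Prometheus creates):

```go
// enospc-probe: try to create a directory the way compaction would, then
// classify the failure. Intended to be run inside the pod against the data mount.
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

func main() {
	dir := "/prometheus"                                  // assumed data mount
	probe := filepath.Join(dir, "probe.tmp-for-creation") // hypothetical name
	if err := os.Mkdir(probe, 0o777); err != nil {
		if errors.Is(err, syscall.ENOSPC) {
			fmt.Println("ENOSPC: volume is full; compaction cannot persist the head block")
		} else {
			fmt.Printf("mkdir failed for another reason: %v\n", err)
		}
		return
	}
	os.Remove(probe) // clean up: the volume still has room
	fmt.Println("mkdir succeeded; the volume has headroom again")
}
```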
level=warn ts=2022-10-13T09:09:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 34 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:09:52.566Z and ts=2022-10-13T09:09:52.580Z]
level=error ts=2022-10-13T09:09:52.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[7 further identical warnings for group=openshift-kubernetes.rules between ts=2022-10-13T09:09:52.581Z and ts=2022-10-13T09:09:52.603Z]
level=error ts=2022-10-13T09:09:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:52.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
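Runs like the openshift-kubernetes.rules block above differ only in their millisecond timestamps, so the stream is easier to triage as a tally than as a linear read. A throwaway sketch that counts entries by origin, assuming one logfmt entry per input line (the program is an aid for reading this artifact, not part of openshift/origin or Prometheus):

```go
// tallywal: count WAL-failure log entries by scrape_pool (scrape errors)
// or rule group (rule-manager warnings), reading logfmt lines from stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches either field; entries in this log carry exactly one of the two.
var origin = regexp.MustCompile(`(?:scrape_pool|group)=(\S+)`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries are long
	for sc.Scan() {
		if m := origin.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%5d  %s\n", n, k)
	}
}
```

Fed this excerpt (for example via `go run tallywal.go < prometheus.log`), it would show that a handful of rule groups and the kubelet and node-exporter scrape pools dominate the flood, all with the same WAL error.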
level=warn ts=2022-10-13T09:09:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 3 more times for group=kube-prometheus-node-recording.rules between ts=2022-10-13T09:09:54.510Z and ts=2022-10-13T09:09:54.511Z]
level=error ts=2022-10-13T09:09:54.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:56.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:57.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:09:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T09:09:57.617Z and ts=2022-10-13T09:09:57.620Z]
level=warn ts=2022-10-13T09:09:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:09:57.659Z and ts=2022-10-13T09:09:57.708Z]
level=error ts=2022-10-13T09:09:57.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:58.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:09:59.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:00.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:10:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[previous warning repeated 4 more times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T09:10:01.487Z and ts=2022-10-13T09:10:01.488Z]
level=error ts=2022-10-13T09:10:01.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:10:01.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:01.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule 
manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:03.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 
target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:03.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:03.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:04.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:04.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:04.300Z caller=manager.go:625 
component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:04.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:05.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:05.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:06.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:06.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 
target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:06.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:08.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:08.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:10:10.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:10.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:11.301Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:12.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:12.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:12.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.433Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:13.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:13.959Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:13.974Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:10:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.085Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.163Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:14.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:14.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:16.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:17.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:17.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:17.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:17.380Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:17.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.473Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:18.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 
target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 
target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.864Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:20.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:20.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:20.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:20.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:21.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.016Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:22.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.566Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:10:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.642Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.642Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:22.643Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:22.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:23.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 
target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:23.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:24.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:24.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:24.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" 
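Note on the failure mode: every entry above shares the same write error on /prometheus/wal/00000039, so this is disk exhaustion on the prometheus-k8s WAL volume rather than a scrape or rule-evaluation bug. On OpenShift, when no persistent storage is configured for the monitoring stack, Prometheus writes /prometheus to an emptyDir on the node's root disk, so node disk pressure surfaces exactly like this. A minimal remediation sketch, assuming the standard cluster-monitoring-config ConfigMap mechanism (the 40Gi size and 24h retention below are illustrative values, not taken from this cluster):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          # Shorter retention reduces TSDB growth; tune to the cluster's needs.
          retention: 24h
          # Back /prometheus with a dedicated PVC instead of the node root disk.
          volumeClaimTemplate:
            spec:
              resources:
                requests:
                  storage: 40Gi

Before resizing, the exhaustion can be confirmed with, e.g., oc -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- df -h /prometheus.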
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:24.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:26.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.616Z 
caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:27.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:28.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:28.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:28.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 
target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:29.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:29.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:30.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:30.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:31.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:31.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:31.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:32.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:32.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:32.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:32.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:32.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:10:32.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:34.021Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:34.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:34.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:34.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:34.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:34.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:35.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:35.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:35.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:36.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:36.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:36.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:36.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:37.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:38.202Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:38.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:38.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:38.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:38.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:39.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:39.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:39.529Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:39.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:39.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:40.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:40.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:40.980Z caller=manager.go:625 component="rule manager" 
group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:41.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:41.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:42.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:42.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:42.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:42.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:42.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape 
commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:43.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:43.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=warn ts=2022-10-13T09:10:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:44.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:44.452Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:10:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:45.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:45.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:46.458Z caller=manager.go:625 component="rule 
manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:47.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:47.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:47.570Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.023Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:48.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.735Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.900Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.908Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:50.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:51.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:51.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:52.198Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89RQH60C07V5YWGHGB0JBC.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:10:52.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.576Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:54.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:54.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:54.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:56.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:56.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:56.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:57.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:10:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:10:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:57.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:10:58.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:58.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:58.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:58.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:59.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:10:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:00.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:00.495Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:00.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:01.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:03.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:03.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:03.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:03.951Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:04.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:04.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:04.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:04.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:05.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:05.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:06.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:06.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:06.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:07.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:08.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:08.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:08.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:08.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:11:08.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:10.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:10.398Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:12.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 
target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:12.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:12.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:13.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:14.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.333Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:14.424Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:14.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:14.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:14.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:11:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:17.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:17.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:11:17.345Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:17.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.552Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.338Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 
target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.935Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:19.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:20.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:20.131Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:20.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:20.539Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:21.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:21.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:22.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.575Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:22.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:11:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:24.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:24.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:11:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:26.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:27.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:27.728Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:27.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:28.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:28.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:28.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:28.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:28.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:29.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:11:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:11:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[The same two messages repeat continuously from 09:11:29Z through 09:11:54Z, differing only in timestamp, scrape target, and rule group: level=error "Scrape commit failed" (caller=scrape.go:1190, component="scrape manager") for essentially every serviceMonitor scrape pool in the cluster (openshift-apiserver, openshift-kube-apiserver, openshift-monitoring etcd/kubelet/node-exporter/alertmanager/thanos/prometheus, openshift-dns, openshift-multus, openshift-kuryr, openshift-machine-api, openshift-ingress, openshift-authentication, the cluster-operator namespaces, and others), and level=warn "Rule sample appending failed" (caller=manager.go:625, component="rule manager") for every rule group (telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kube-scheduler.rules, kube-apiserver.rules, openshift-ingress.rules, openshift-kubernetes.rules, cluster-version, general.rules, kubernetes-storage, and others). Every entry carries err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (a few read "log series" instead of "log samples"). The one distinct event in the run is a TSDB head-compaction failure at 09:11:52.199Z:]
level=error ts=2022-10-13T09:11:52.199Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89TJ474JNSBTF8SZB61DN9.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:54.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:56.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:56.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:11:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:11:57.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:57.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:11:58.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:58.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:58.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:59.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:11:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:00.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:00.533Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:01.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:01.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:01.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:01.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:01.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:03.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:03.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:04.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:04.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:04.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:05.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:05.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:06.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:06.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:06.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:08.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:08.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:12:08.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:08.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:10.133Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:10.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:10.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:10.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:11.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 
target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:12.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:12.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:12.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.666Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:13.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:13.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:13.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:14.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:14.374Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:14.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:14.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:14.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:14.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:15.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:12:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:15.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:16.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:17.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:12:17.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:17.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:17.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:17.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.470Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 
target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.675Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.867Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.877Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:19.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:20.308Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:21.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:22.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:12:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:24.512Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:24.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:24.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:26.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:26.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.616Z caller=manager.go:625 component="rule manager" 
group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write 
to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:27.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:28.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:28.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:28.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit 
failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:29.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:29.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:30.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:30.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:12:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:31.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:31.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.922Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:34.020Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:34.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:35.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:12:35.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:35.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:36.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:36.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:36.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:37.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:38.189Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 
target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:38.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:40.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:40.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:40.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:41.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:42.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:42.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:42.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:12:43.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:43.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:43.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:43.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:43.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:43.941Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:43.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:43.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:43.995Z caller=manager.go:625 component="rule 
manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.072Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.097Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.183Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.272Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:44.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:44.586Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:44.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:45.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:45.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:46.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:46.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:46.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:47.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:47.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:47.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:47.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:47.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 
target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:12:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:48.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.506Z 
caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.907Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:50.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:50.140Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:50.543Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:51.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:52.200Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89WCQ8W8E2VHBAF3ETTBM2.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T09:12:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.571Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:12:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:52.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:52.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:12:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:54.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:12:54.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:12:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:12:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... hundreds of near-identical entries omitted: the same "write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (or "log series") error repeats for every scrape pool (caller=scrape.go:1190, msg="Scrape commit failed") and every rule group (caller=manager.go:625, msg="Rule sample appending failed") from ts=2022-10-13T09:12:54Z through ts=2022-10-13T09:13:22Z ...]
level=warn ts=2022-10-13T09:13:22.573Z caller=manager.go:625 component="rule manager"
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:13:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:22.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:22.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:23.090Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:23.692Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:24.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:24.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:24.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:24.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:24.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:26.004Z caller=manager.go:625 component="rule manager" 
group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:26.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:26.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:26.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:27.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.618Z 
caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:27.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:27.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:28.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:28.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:28.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:28.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:28.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:13:29.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:29.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:30.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:31.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:31.487Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:31.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:31.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:31.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.545Z caller=manager.go:625 component="rule 
manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 
target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:33.983Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:34.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:35.830Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:36.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:36.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:36.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:36.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:37.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:37.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:38.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:38.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:38.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:38.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:13:39.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:39.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:40.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:40.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:40.982Z 
caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:41.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:41.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:42.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:42.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:42.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.109Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.119Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:43.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.143Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.233Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:44.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.328Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:44.429Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:44.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:44.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:46.087Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:46.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:46.537Z 
caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:47.237Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:47.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:13:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:48.910Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.648Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.822Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.830Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:49.942Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:49.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:50.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:51.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:52.201Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF89Y7A8BDF7JKN7588ASCXB.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:13:52.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.576Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:52.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:52.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:53.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:54.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:54.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:54.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:54.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:56.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:56.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:56.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:56.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:57.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.656Z 
caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:13:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:57.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:58.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:58.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:58.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:58.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:59.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:59.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:13:59.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:00.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:00.576Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:01.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:01.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:02.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:02.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
level=error ts=2022-10-13T09:14:02.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:03.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:06.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:08.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:12.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:14.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:17.348Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.826Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:19.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:19.943Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:20.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:20.633Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:21.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:22.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:22.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:24.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:24.915Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:26.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:26.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:26.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:27.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:14:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.738Z caller=manager.go:625 
component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:27.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:28.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:28.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:28.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:28.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:29.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:29.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:29.910Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:29.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:30.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:30.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:31.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:31.056Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:31.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:33.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:33.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:33.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:34.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:35.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:35.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 
target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:36.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:36.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:36.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:36.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:37.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:37.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:38.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:14:38.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:38.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:38.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:38.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.720Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:40.105Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:40.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:41.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:42.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:42.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:42.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:42.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 
target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:43.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:43.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:14:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.094Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:44.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.205Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.307Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:14:44.409Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:14:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:44.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:14:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:14:52.201Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A01X95W2VE1B81V0SS7WX.tmp-for-creation: no space left on device"
[... more than one hundred additional "Scrape commit failed" (caller=scrape.go:1190) and "Rule sample appending failed" (caller=manager.go:625) entries follow between ts=2022-10-13T09:14:44Z and ts=2022-10-13T09:15:10Z, covering every configured scrape pool and rule group; all carry err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" or its "log series" variant ...]
manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:11.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:12.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:12.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 
target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:12.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:13.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:13.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:13.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:13.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:15:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:13.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:13.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:13.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:13.993Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:13.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.179Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:14.352Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:14.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:15:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:16.468Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:16.469Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:16.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:16.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:16.998Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:17.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:17.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:17.226Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:17.379Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:17.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:17.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.603Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:18.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.634Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.794Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.801Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 
target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:19.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:20.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:20.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:21.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:22.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:22.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:24.462Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:24.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:24.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:24.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:26.004Z caller=manager.go:625 component="rule manager" 
group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:26.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:26.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:26.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.618Z caller=manager.go:625 component="rule 
manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:27.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 
target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:27.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:28.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:28.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:29.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:30.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:30.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:30.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:30.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:31.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:31.489Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:31.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:31.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.546Z caller=manager.go:625 component="rule manager" 
group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.974Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:32.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:33.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:33.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:33.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:33.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:34.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:34.202Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:34.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:34.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:34.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:35.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:35.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:35.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:15:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:35.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:36.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:36.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:36.361Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:37.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:37.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:38.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:38.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 
target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:38.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:38.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:38.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:39.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:39.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:39.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:39.729Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics 
msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:39.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:39.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:40.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:40.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:40.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:40.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:41.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:41.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:42.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:42.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:42.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:42.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:42.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:43.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:15:43.153Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:43.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:43.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:43.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.270Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:44.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:44.389Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:44.679Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:44.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:45.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:45.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:45.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:15:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:47.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:47.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:47.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:47.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:15:48.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.845Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.853Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:49.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:50.253Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:51.205Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:51.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:52.202Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A1WGAJEVAJ12SMB1HYBXK.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:15:52.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.569Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.590Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:52.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:54.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:54.513Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:54.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:56.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:56.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:15:56.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:15:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:57.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:58.416Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:58.660Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:58.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:58.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:15:59.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:00.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:01.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:01.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:01.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:16:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.548Z caller=manager.go:625 component="rule 
manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:03.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:03.476Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:03.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:03.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:04.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:16:04.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:04.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:05.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:05.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:06.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:06.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:06.895Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:06.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:07.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:08.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:08.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:08.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:08.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:08.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:08.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:08.842Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:10.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:10.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 
target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:12.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:12.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:12.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=warn ts=2022-10-13T09:16:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:13.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:13.958Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:13.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.086Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.176Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.279Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:14.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:14.377Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:14.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:15.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:15.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:15.435Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:16.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:16.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:16.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:17.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:17.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:17.228Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:17.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:18.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:16:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.684Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.856Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.864Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:20.289Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:21.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=warn ts=2022-10-13T09:16:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:22.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:22.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:22.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:23.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 
target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:24.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:26.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:26.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:16:27.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:28.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:28.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:29.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:29.935Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:30.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:30.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:30.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:16:31.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:31.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:31.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:32.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:32.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:16:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:16:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:32.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:16:52.203Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A3Q3BFPC6BB2MSQ7R687K.tmp-for-creation: no space left on device"
[the identical "write /prometheus/wal/00000039: no space left on device" failure repeats continuously from 09:16:32Z through 09:16:58Z: level=warn "Rule sample appending failed" for every rule-manager group (node-exporter.rules, kubelet.rules, kube-scheduler.rules, kube-apiserver.rules, openshift-kubernetes.rules, openshift-ingress.rules, openshift-monitoring.rules, k8s.rules, kube-prometheus-node-recording.rules, cluster-version, node.rules, general.rules, and others) and level=error "Scrape commit failed" for every serviceMonitor scrape target across the openshift-* namespaces]
level=error ts=2022-10-13T09:16:58.634Z caller=scrape.go:1190
component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:58.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:59.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:16:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:00.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:01.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:17:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:02.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:02.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:02.735Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:02.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:02.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:02.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:03.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:03.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:17:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:04.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:04.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:17:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:04.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:05.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:05.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:06.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:06.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:06.950Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:07.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:08.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:08.421Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:08.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:08.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.112Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:10.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:10.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:10.729Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:10.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:10.978Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:12.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:12.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:12.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:13.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:13.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:13.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:13.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:13.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:17:13.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:13.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 
target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:14.463Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:14.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:14.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:15.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:16.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:17.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:17.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:17.567Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:18.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.503Z caller=manager.go:625 
component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.665Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.824Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.832Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:20.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:20.282Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:21.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:22.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
level=warn ts=2022-10-13T09:17:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.604Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:22.605Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:22.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:24.472Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:24.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:26.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=warn ts=2022-10-13T09:17:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:27.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
level=error ts=2022-10-13T09:17:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:28.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:29.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:17:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:17:33.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:33.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:33.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:33.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:34.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:34.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:34.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:35.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:35.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:35.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:36.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:36.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:36.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:37.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:38.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:38.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:17:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:38.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:38.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.802Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:40.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:40.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:40.988Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:41.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:42.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:42.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:42.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.078Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 
target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.742Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:43.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.082Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:44.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.215Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:44.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:44.456Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:44.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:44.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:45.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:46.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:46.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:47.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:47.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:47.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:47.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.309Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:48.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.643Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.808Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.815Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:50.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:51.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=warn ts=2022-10-13T09:17:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:52.204Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A5HPB0NXV76Y81MNW293G.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:17:52.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.571Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:17:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:54.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:54.535Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:54.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:54.728Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:56.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:56.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.661Z 
caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.670Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:17:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:57.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:58.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:58.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:58.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:17:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:58.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:59.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:59.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:59.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:17:59.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:00.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:00.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:01.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics 
msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:01.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:01.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:01.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:18:02.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:02.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:02.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:02.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:02.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:02.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:03.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:03.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:03.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:03.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:04.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:04.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:04.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:05.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:05.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:05.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:06.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:06.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:06.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:06.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:18:07.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:08.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:08.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:08.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:08.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:09.113Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:09.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:09.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:09.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:10.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:10.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:10.985Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:11.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:11.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:12.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:12.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:12.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:12.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 
target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:12.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.089Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:13.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:13.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=warn ts=2022-10-13T09:18:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.005Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.070Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.193Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:14.389Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:14.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:14.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:14.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:15.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:15.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:15.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:16.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:17.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:17.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:17.260Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:17.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:17.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:17.712Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:18.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.679Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.862Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.869Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:19.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:20.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:21.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:21.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:22.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:18:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.582Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:22.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:22.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:24.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:24.492Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:24.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:26.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:26.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:26.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:26.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:18:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:27.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:28.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:28.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:28.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:28.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:29.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:30.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:30.493Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:31.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:31.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:31.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:31.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:34.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:34.300Z 
caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:34.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:35.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:35.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:36.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:36.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:36.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:36.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:36.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:37.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:38.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:38.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:18:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:38.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:38.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:39.996Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:40.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:40.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:41.287Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:42.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.342Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:43.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.009Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:44.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.202Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:44.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.305Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:44.400Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:44.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:44.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:44.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:44.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:46.465Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:46.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:46.912Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:47.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:47.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:47.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:18:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.117Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.519Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.523Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.524Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.526Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.548Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.834Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:49.947Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:49.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:50.094Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:50.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:50.538Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:51.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:51.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:52.204Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8A7C9CRVTV03AMKE9HM2M7.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T09:18:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.579Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:52.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.614Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:52.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:54.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:54.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:54.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:54.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:54.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:54.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:54.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:56.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:56.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:56.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:56.896Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:18:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:57.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:58.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:58.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:58.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:58.680Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:58.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:59.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:59.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:59.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:18:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:00.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:00.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:00.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:00.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:19:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:01.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.130Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:19:02.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:02.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:03.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:03.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:19:03.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:04.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:04.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:04.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:04.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:05.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:05.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:05.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:06.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:06.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:07.626Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:08.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:08.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:08.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:08.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:09.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:09.168Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:09.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:10.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:10.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:10.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:10.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:10.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:11.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:12.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:12.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:12.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:12.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:13.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:13.947Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:13.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:13.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.203Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:14.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.294Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:14.387Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:14.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:14.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:14.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:15.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:15.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:16.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:19:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:16.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:17.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.654Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.817Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.825Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:19.946Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:20.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:22.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:22.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:19:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:27.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:19:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:28.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:28.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:28.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:29.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:29.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:30.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:30.491Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:30.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:30.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:31.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:31.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.734Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:33.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:34.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:34.299Z caller=manager.go:625 
component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:34.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:34.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:35.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:35.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:35.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:35.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:35.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:36.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write 
to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:36.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:36.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:36.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:37.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:37.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:38.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:19:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:38.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:38.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:39.998Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:40.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:40.420Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 
target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:41.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:42.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:42.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:43.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:43.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:43.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:43.667Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:43.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:43.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:43.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.145Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.230Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:44.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:44.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:44.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:44.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:44.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:45.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:19:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:45.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:46.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:46.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:19:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:47.252Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:47.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.469Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:48.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 
target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.512Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.513Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.797Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.957Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:49.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:50.388Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:51.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:51.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:51.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:52.205Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: 
mkdir /prometheus/01GF8A96WD3VA8KK51V5HFAHT4.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T09:19:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.573Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:52.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:19:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:52.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:54.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:19:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:56.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:56.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:56.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.115Z caller=manager.go:625 component="rule manager" 
group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.685Z 
caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:19:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:58.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:59.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:19:59.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:00.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:00.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:00.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:01.039Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:01.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:01.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.139Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.558Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:03.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:03.908Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:03.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:04.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:20:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:05.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:05.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:05.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:06.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:06.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:07.651Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:08.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:08.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:08.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 
target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:10.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:10.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:10.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:20:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:11.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:11.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:12.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:12.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:12.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:20:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:12.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:13.851Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.224Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.317Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:14.415Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:14.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:14.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:20:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:16.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:17.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:17.192Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:17.359Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:17.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 
target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.879Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:19.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:20.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:20:20.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:20.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:20.556Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.569Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:20:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:22.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:24.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:26.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:26.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:20:26.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:26.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:26.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.619Z caller=manager.go:625 component="rule 
manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.725Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.726Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:27.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:28.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:28.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:28.372Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:28.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:28.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:29.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:29.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:30.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:30.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 
target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:31.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:31.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:31.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:31.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:20:31.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=warn ts=2022-10-13T09:20:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:33.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:20:33.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:33.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:35.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:35.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:35.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:36.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:20:36.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:36.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:38.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:38.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:38.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:38.675Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:38.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:39.998Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:40.052Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:41.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 
target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:41.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:42.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:42.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:42.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.086Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.664Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:43.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:43.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.053Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:44.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.226Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:44.430Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:44.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:20:44.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:45.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:45.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:47.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:47.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:47.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:47.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.314Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:48.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.121Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.530Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.817Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:49.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:50.428Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:51.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:51.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:20:52.206Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AB1FEMVKJS2VNHACSQQRN.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:20:52.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.573Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:20:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:20:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... the same "Rule sample appending failed" warning repeats between 09:20:52 and 09:21:19 for the rule groups openshift-kubernetes.rules, kubernetes-system-apiserver, kube-prometheus-node-recording.rules, multus-admission-controller-monitor-service.rules, node.rules, prometheus, openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kubernetes-system-kubelet, kube-scheduler.rules, openshift-sre.rules, apiserver-requests-in-flight, kube-apiserver.rules, kubernetes-recurring.rules, cluster-version, general.rules, kube-prometheus-general.rules, and openshift-ingress.rules, always with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" ...]
level=error ts=2022-10-13T09:20:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... the same "Scrape commit failed" error repeats between 09:20:52 and 09:21:19 for serviceMonitor scrape pools in openshift-apiserver, openshift-apiserver-operator, openshift-authentication, openshift-authentication-operator, openshift-cloud-credential-operator, openshift-cluster-csi-drivers, openshift-cluster-machine-approver, openshift-cluster-version, openshift-config-operator, openshift-console-operator, openshift-controller-manager, openshift-controller-manager-operator, openshift-dns, openshift-dns-operator, openshift-etcd-operator, openshift-image-registry, openshift-ingress, openshift-ingress-operator, openshift-insights, openshift-kube-apiserver, openshift-kube-apiserver-operator, openshift-kube-controller-manager, openshift-kube-controller-manager-operator, openshift-kube-scheduler, openshift-kube-scheduler-operator, openshift-kuryr, openshift-machine-api, openshift-machine-config-operator, openshift-marketplace, openshift-monitoring (kubelet, node-exporter, etcd, alertmanager, prometheus-k8s, prometheus-adapter, prometheus-operator, thanos-querier, thanos-sidecar, telemeter-client, grafana), openshift-multus, openshift-network-diagnostics, and openshift-operator-lifecycle-manager, with err alternating between "write to WAL: log samples: ..." and "write to WAL: log series: ...", all against /prometheus/wal/00000039: no space left on device ...]
level=warn ts=2022-10-13T09:21:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:19.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:19.798Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:20.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:20.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:21:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:20.462Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:21.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:22.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.569Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:21:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:22.611Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:22.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:24.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:24.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:26.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:26.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:26.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:27.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.656Z caller=manager.go:625 component="rule manager" 
group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:27.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:28.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:21:28.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:28.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:28.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:28.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:29.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:29.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:30.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:30.502Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:31.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:31.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:31.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:31.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:31.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:31.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:32.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:34.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:34.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:34.299Z caller=manager.go:625 
component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:21:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:34.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:35.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:35.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:21:36.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write 
to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:36.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:37.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:38.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.174Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:39.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.406Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:40.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:41.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:42.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.096Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.096Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.227Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.335Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:44.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:46.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:47.313Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:48.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.808Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:49.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.977Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:49.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:50.467Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:52.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:52.207Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8ACW2F8PSXAW38YC33MZQV.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:21:52.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:52.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:54.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:21:57.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:58.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:21:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error
ts=2022-10-13T09:22:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:02.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:02.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:02.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:02.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:03.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:22:03.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:03.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:04.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:04.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:05.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:05.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:05.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:06.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:06.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:06.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:07.635Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:08.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:08.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:08.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:08.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.121Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:10.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:10.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:10.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:11.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:11.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:12.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:12.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:12.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:12.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:13.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:13.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:13.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:22:13.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:13.968Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:13.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 
target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.337Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:14.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:14.669Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:14.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:15.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:16.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:16.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:16.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:17.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:17.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:17.290Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:17.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:17.565Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.756Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:18.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.504Z caller=manager.go:625 
component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.718Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.879Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:19.889Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:19.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:20.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:20.268Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:21.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:22:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:22.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical "Rule sample appending failed" warnings for group=openshift-kubernetes.rules, ts=2022-10-13T09:22:22.566Z through 09:22:22.610Z, elided ...]
level=error ts=2022-10-13T09:22:22.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:22:28.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[... equivalent "Scrape commit failed" errors (err="write to WAL: log samples: ..." or "write to WAL: log series: ...") for the remaining serviceMonitor scrape pools (kubelet, etcd, dns-default, router-default, oauth-openshift, image-registry, machine-api, machine-config-daemon, multus, kuryr-cni, openstack-cinder-csi-driver, alertmanager, thanos-querier, thanos-sidecar, prometheus-k8s, prometheus-adapter, prometheus-operator, kube-state-metrics, telemeter-client, cluster-version-operator, and the various openshift-* operators) and identical "Rule sample appending failed" warnings for the remaining rule groups (kube-prometheus-node-recording.rules, kubernetes-system-apiserver, multus-admission-controller-monitor-service.rules, node.rules, prometheus, openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kubernetes-system-kubelet, kube-scheduler.rules, openshift-sre.rules, apiserver-requests-in-flight, kube-apiserver.rules, kubernetes-recurring.rules, cluster-version, general.rules, kube-prometheus-general.rules), ts=2022-10-13T09:22:23Z through 09:22:48Z, elided ...]
level=error ts=2022-10-13T09:22:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error
ts=2022-10-13T09:22:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.127Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.576Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.910Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:50.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:50.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:50.515Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:51.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:51.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:52.208Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AEPNGXJRX9FDT16AH7YMY.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:22:52.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.574Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.615Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:54.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:54.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:56.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:56.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:56.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:56.896Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:57.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:22:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:57.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:58.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:58.674Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:58.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:59.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:59.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:22:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:01.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:23:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:01.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:01.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:01.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:02.159Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:02.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:23:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

Every entry from ts=2022-10-13T09:23:02.558Z through ts=2022-10-13T09:23:30.519Z carries the same underlying error: the Prometheus volume backing /prometheus is out of space. Two signatures repeat throughout, differing only in timestamp and target:

level=error caller=scrape.go:1190 component="scrape manager" msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (some entries read "log series" instead of "log samples")
level=warn caller=manager.go:625 component="rule manager" msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

Scrape commits failed for the node-level pools across the six node IPs 10.196.0.105, 10.196.0.199, 10.196.2.72, 10.196.2.169, 10.196.3.178, 10.196.3.187 (not every pool appears on every node within this window):
- serviceMonitor/openshift-monitoring/kubelet/0-3 (:10250/metrics, /metrics/cadvisor, /metrics/probes; http :9537)
- serviceMonitor/openshift-monitoring/node-exporter/0 (:9100)
- serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 (:9001)
- serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 (http :9655)

for the control-plane pools on 10.196.0.105, 10.196.3.178, 10.196.3.187:
- serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 (:6443; 10.196.0.105 and 10.196.3.178)
- serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 (:10257)
- serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 and /1 (:10259/metrics, /metrics/resources)
- serviceMonitor/openshift-monitoring/etcd/0 (:9979)
- serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 (:9192; 10.196.0.105)
- serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0-3 (:9202-:9205; 10.196.0.105 and 10.196.3.178)

and for the pod-level pools:
- serviceMonitor/openshift-apiserver/openshift-apiserver/0 (10.128.120.187, 10.128.120.232 :8443); openshift-apiserver-operator-check-endpoints/0 (10.128.120.232, 10.128.121.9 :17698)
- serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 (10.128.97.62:8443)
- serviceMonitor/openshift-authentication/oauth-openshift/0 (10.128.116.139, 10.128.116.141, 10.128.116.190 :6443); openshift-authentication-operator/authentication-operator/0 (10.128.74.228:8443)
- serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 (10.128.62.5:8443)
- serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 (10.128.52.71:8443)
- serviceMonitor/openshift-config-operator/config-operator/0 (10.128.73.213:8443)
- serviceMonitor/openshift-console-operator/console-operator/0 (10.128.133.246:8443)
- serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 (10.128.110.148, 10.128.110.159, 10.128.111.48 :8443); openshift-controller-manager-operator/openshift-controller-manager-operator/0 (10.128.48.110:8443)
- serviceMonitor/openshift-dns/dns-default/0 (10.128.126.55, 10.128.126.73, 10.128.126.114, 10.128.127.52, 10.128.127.168 :9154); openshift-dns-operator/dns-operator/0 (10.128.37.87:9393)
- serviceMonitor/openshift-etcd-operator/etcd-operator/0 (10.128.40.74:8443)
- serviceMonitor/openshift-image-registry/image-registry/0 (10.128.83.90:5000/extensions/v2/metrics)
- serviceMonitor/openshift-ingress/router-default/0 (10.196.0.199:1936); openshift-ingress-operator/ingress-operator/0 (10.128.59.173:9393)
- serviceMonitor/openshift-insights/insights-operator/0 (10.128.29.145:8443)
- serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 (10.128.87.239:8443); openshift-kube-scheduler-operator/kube-scheduler-operator/0 (10.128.12.37:8443)
- serviceMonitor/openshift-machine-api/machine-api-controllers/1 and /2 (10.128.44.154 :8442, :8444); machine-api-operator/0 (10.128.44.42:8443); cluster-autoscaler-operator/0 (10.128.45.39:9192)
- serviceMonitor/openshift-marketplace/marketplace-operator/0 (10.128.79.141:8081)
- serviceMonitor/openshift-monitoring: alertmanager/0 (10.128.22.112, 10.128.23.138, 10.128.23.161 :9095), thanos-querier/0 (10.128.23.114, 10.128.23.183 :9091), thanos-sidecar/0 (10.128.23.18, 10.128.23.35 :10902), prometheus-k8s/0 (10.128.23.35:9091), prometheus-adapter/0 (10.128.23.77, 10.128.23.82 :6443), prometheus-operator/0 (10.128.22.177:8443), cluster-monitoring-operator/0 (10.128.23.49:8443), kube-state-metrics/1 (10.128.22.45:9443), openshift-state-metrics/1 (10.128.22.89:9443), telemeter-client/0 (10.128.22.239:8443), grafana/0 (10.128.22.230:3000)
- serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 (10.128.34.19, 10.128.34.59 :8443); monitor-network/0 (10.128.34.62, 10.128.34.92, 10.128.34.135, 10.128.34.247, 10.128.35.46, 10.128.35.157 :8443)
- serviceMonitor/openshift-network-diagnostics/network-check-source/0 (10.128.103.204:17698)
- serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 (10.128.92.123:8443); catalog-operator/0 (10.128.93.117:8443)
- serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 (10.128.56.252:8443)

Rule evaluation failed in the same window for these groups (occurrences): openshift-kubernetes.rules (42), kube-apiserver.rules (21), k8s.rules (12), openshift-ingress.rules (8), kube-scheduler.rules (6), openshift-monitoring.rules (6), cluster-version (4), kube-prometheus-node-recording.rules (4), kubelet.rules (3), kubernetes-storage (2), multus-admission-controller-monitor-service.rules (2), node.rules (2), and once each: apiserver-requests-in-flight, cluster-network-operator-kuryr.rules, general.rules, kube-prometheus-general.rules, kubernetes-recurring.rules, kubernetes-system-apiserver, kubernetes-system-kubelet, openshift-sre.rules, prometheus.
left on device" level=error ts=2022-10-13T09:23:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:28.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:28.748Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:29.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:29.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:30.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:30.519Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:31.050Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:31.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:31.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:31.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:31.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.779Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:33.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:33.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:34.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:34.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:34.300Z caller=manager.go:625 
component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:34.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:34.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:34.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:35.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:35.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:35.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:35.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write 
to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:36.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:36.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:36.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:37.628Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:38.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:38.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:23:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:38.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:38.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:39.998Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:40.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:40.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:40.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:40.719Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:40.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:41.273Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:42.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:42.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:42.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.344Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:43.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:43.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.090Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:44.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.365Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:44.492Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:44.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:44.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:45.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:46.108Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:46.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:46.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:46.908Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:47.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:47.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:47.286Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:47.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:47.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:48.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:23:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:23:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:23:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:23:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:23:52.209Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AGH8H7N0NZY9P81HPJ4TE.tmp-for-creation: no space left on device"
[... the dump continues in this pattern from ts=2022-10-13T09:23:48Z through ts=2022-10-13T09:24:14Z: roughly 140 level=error "Scrape commit failed" records (caller=scrape.go:1190, component="scrape manager") and roughly 130 level=warn "Rule sample appending failed" records (caller=manager.go:625, component="rule manager"), every one failing with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"; about a dozen targets report "log series" instead of "log samples" ...]
[... scrape pools affected (serviceMonitor/...): openshift-apiserver/openshift-apiserver/0, openshift-apiserver/openshift-apiserver-operator-check-endpoints/0, openshift-apiserver-operator/openshift-apiserver-operator/0, openshift-authentication/oauth-openshift/0, openshift-authentication-operator/authentication-operator/0, openshift-cloud-credential-operator/cloud-credential-operator/0, openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0-3, openshift-cluster-machine-approver/cluster-machine-approver/0, openshift-cluster-node-tuning-operator/node-tuning-operator/0, openshift-cluster-storage-operator/cluster-storage-operator/0, openshift-cluster-version/cluster-version-operator/0, openshift-config-operator/config-operator/0, openshift-console-operator/console-operator/0, openshift-controller-manager/openshift-controller-manager/0, openshift-controller-manager-operator/openshift-controller-manager-operator/0, openshift-dns/dns-default/0, openshift-dns-operator/dns-operator/0, openshift-image-registry/image-registry/0, openshift-ingress/router-default/0, openshift-ingress-operator/ingress-operator/0, openshift-insights/insights-operator/0, openshift-kube-apiserver/kube-apiserver/0, openshift-kube-apiserver-operator/kube-apiserver-operator/0, openshift-kube-controller-manager/kube-controller-manager/0, openshift-kube-controller-manager-operator/kube-controller-manager-operator/0, openshift-kube-scheduler/kube-scheduler/0-1, openshift-kube-scheduler-operator/kube-scheduler-operator/0, openshift-kuryr/monitor-kuryr-cni/0, openshift-kuryr/monitor-kuryr-controller/0, openshift-machine-api/cluster-autoscaler-operator/0, openshift-machine-api/machine-api-controllers/0-2, openshift-machine-api/machine-api-operator/0, openshift-machine-config-operator/machine-config-daemon/0, openshift-marketplace/marketplace-operator/0, openshift-monitoring/{alertmanager/0, cluster-monitoring-operator/0, etcd/0, grafana/0, kubelet/0-3, node-exporter/0, openshift-state-metrics/0, prometheus-adapter/0, prometheus-k8s/0, thanos-querier/0, thanos-sidecar/0}, openshift-multus/monitor-multus-admission-controller/0, openshift-multus/monitor-network/0, openshift-network-diagnostics/network-check-source/0, openshift-operator-lifecycle-manager/catalog-operator/0, openshift-service-ca-operator/service-ca-operator/0 ...]
[... rule groups affected: apiserver-requests-in-flight, cluster-network-operator-kuryr.rules, k8s.rules, kube-apiserver.rules, kube-prometheus-node-recording.rules, kube-scheduler.rules, kubelet.rules, kubernetes-storage, kubernetes-system-apiserver, kubernetes-system-kubelet, multus-admission-controller-monitor-service.rules, node.rules, node-exporter.rules, openshift-etcd-telemetry.rules, openshift-ingress.rules, openshift-kubernetes.rules (a run of 42 consecutive warnings, ts=2022-10-13T09:23:52.565Z-.618Z), openshift-monitoring.rules, openshift-sre.rules, prometheus, telemeter.rules; the final kube-apiserver.rules record at ts=2022-10-13T09:24:14.248Z is cut off mid-message in the source ...]
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:14.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:14.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:14.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:15.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:15.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:15.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:16.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:16.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:16.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:17.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:17.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:17.280Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:17.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:17.566Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.504Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.503Z caller=manager.go:625 
component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.769Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.926Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.934Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:24:19.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:20.322Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:21.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:21.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:22.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:24:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.585Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:22.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:24.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:24.511Z caller=manager.go:625 component="rule manager" 
group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:24.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:24.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:26.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:27.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:24:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:27.717Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:27.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:28.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:28.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:28.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:28.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:29.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:29.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:29.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:30.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:30.491Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:31.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:31.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:31.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:33.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:33.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:33.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:33.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:33.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:34.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:34.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:34.299Z 
caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:34.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:34.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:34.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:35.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:35.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:36.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:36.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:36.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:36.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:36.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:36.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:37.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:38.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:24:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:38.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:39.996Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:40.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:40.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:41.277Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:42.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:42.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:42.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:43.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:43.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:43.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:43.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:43.428Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:43.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:43.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.090Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:44.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:44.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 
target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:45.402Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:46.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:47.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 
target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:47.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:47.366Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:47.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:24:48.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:48.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.242Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.835Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:49.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:50.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:50.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:50.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:50.460Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:50.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:51.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:24:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:52.211Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AJBVJR57HSS3MN3E8XZJJ.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:24:52.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.574Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:24:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:52.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:53.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:54.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:54.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:24:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:56.007Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:56.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:56.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:56.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:56.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:24:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:24:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:58.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:58.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:58.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:59.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:59.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:59.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:24:59.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:00.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:01.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:01.134Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:01.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:01.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:01.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.149Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.556Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:03.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:03.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:03.909Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:04.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:04.444Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:04.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:25:04.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:05.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:05.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:05.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:06.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:06.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:06.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:07.646Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:08.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:08.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.168Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:10.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:10.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:10.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:10.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:12.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:12.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:12.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.099Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:13.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:13.945Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:13.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:13.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.079Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.305Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:14.396Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:14.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:14.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:15.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:25:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:25:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:25:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:18.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:22.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:22.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:23.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:24.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:27.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.011Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:30.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:31.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:32.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:35.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:37.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:38.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.168Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:40.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:40.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:41.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.087Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:43.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.363Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:44.467Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:44.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:46.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.171Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:47.195Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:48.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.689Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.851Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.859Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:49.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:49.943Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:50.282Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:51.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:52.212Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AM6EM9VNZD3D8E4NREAM7.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:25:52.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:52.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:52.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.440Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:25:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:57.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:58.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:25:59.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:26:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:26:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space
left on device" level=error ts=2022-10-13T09:26:01.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:01.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:01.494Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:01.495Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:01.495Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:01.496Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:01.496Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:01.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:01.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:01.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.140Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.558Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:03.005Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:03.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:03.911Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:03.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:04.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:04.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:26:04.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:05.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:05.980Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:06.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:06.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:07.624Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:08.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:08.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:08.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:08.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 
target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:09.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:10.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:10.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:10.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:10.720Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:26:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:11.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:12.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:12.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:12.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:26:12.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:13.847Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:13.942Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:13.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.131Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.264Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:14.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.397Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:14.526Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:14.668Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:15.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:15.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=warn ts=2022-10-13T09:26:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:16.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:17.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:17.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:17.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:17.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 
target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:17.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:18.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.703Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.868Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:19.878Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:26:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:20.337Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:21.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:21.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:22.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.569Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:26:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:24.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:24.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:24.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:24.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:26.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:26:26.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:26.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.619Z caller=manager.go:625 component="rule 
manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.695Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:28.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:28.387Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:28.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:28.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:28.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:29.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:29.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:29.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:30.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:30.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 
target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:30.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:31.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:31.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:26:31.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:31.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=warn ts=2022-10-13T09:26:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:32.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:33.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:26:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:33.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:33.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:33.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:33.994Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:34.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:34.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:34.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:34.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:35.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:35.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:35.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:36.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:26:36.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:36.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:36.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:36.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:38.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:38.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:38.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:38.674Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:38.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:39.117Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:39.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:39.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:39.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:39.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:40.005Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:40.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:40.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:40.987Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:41.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 
target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:41.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:42.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:42.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:42.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:42.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:43.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:43.961Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:43.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.103Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:44.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:44.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.303Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:44.406Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:26:45.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:45.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:46.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:47.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:26:47.182Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:47.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:47.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:47.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:47.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 
target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:48.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=error ts=2022-10-13T09:26:49.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.714Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.928Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:50.336Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:51.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:51.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:52.053Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:52.213Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AP11NNYWC67QDAE4CEY4S.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:26:52.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.583Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:26:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:52.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:52.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 
target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:54.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:56.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:56.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:56.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.050Z 
caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:26:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:26:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:57.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:58.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:58.380Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:58.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:58.832Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:59.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:59.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:59.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:26:59.932Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:00.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:01.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:01.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:27:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:01.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:01.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:01.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:02.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:02.242Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:27:02.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:02.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:02.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:03.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:03.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:27:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:03.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:04.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:04.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:04.445Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:04.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:05.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:05.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:05.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:06.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:06.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:06.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:07.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:07.866Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:08.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:08.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:08.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.424Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:10.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:10.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:10.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:27:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:11.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:11.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:12.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:12.412Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:12.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:12.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:13.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:13.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:13.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.232Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:14.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.355Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:14.446Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:14.587Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:14.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:14.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:15.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:16.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:16.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:17.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:17.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:17.239Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.019Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:18.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.698Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.893Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.901Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:20.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:20.297Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:21.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:22.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.579Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:27:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:24.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:24.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:24.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:24.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:26.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.052Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.672Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:27.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:28.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:28.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:27:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:28.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:28.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:29.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:29.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:30.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:30.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:30.577Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:31.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:31.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:31.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:31.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.552Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.553Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.553Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:27:32.554Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.554Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:32.555Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:32.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:33.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:34.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:34.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:34.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:35.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:35.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:35.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:36.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:36.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:36.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:36.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:37.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:38.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:38.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:27:38.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:38.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:40.114Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:40.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:41.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:41.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 
target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:42.070Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:42.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:42.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:43.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.259Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.392Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:44.516Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:44.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:44.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:27:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:27:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:27:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device"

level=error ts=2022-10-13T09:27:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:27:47.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:27:52.214Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AQVMP4ZQMPS2JDJ2JV0EM.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:27:57.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"

[... the same "write to WAL: log samples" / "write to WAL: log series" failure against /prometheus/wal/00000039 repeats continuously between 09:27:47Z and 09:28:13Z, for every scrape pool (kubelet, node-exporter, etcd, kube-apiserver, openshift-apiserver, dns-default, monitor-network, machine-config-daemon, router-default, alertmanager, thanos-querier, thanos-sidecar, prometheus-k8s, and the per-operator ServiceMonitors) and every rule group (kube-apiserver.rules, openshift-ingress.rules, openshift-kubernetes.rules, k8s.rules, node-exporter.rules, kube-scheduler.rules, openshift-monitoring.rules, kubelet.rules, telemeter.rules, and others) ...]
manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.663Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:13.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=warn ts=2022-10-13T09:28:13.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.134Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.255Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.358Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:14.448Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:14.726Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:15.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:15.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:16.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:16.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:16.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:17.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:17.156Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:17.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:17.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:28:18.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.874Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:18.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.488Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.503Z caller=manager.go:625 component="rule manager" 
group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.650Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.658Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:20.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:21.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.569Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.582Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:22.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:22.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:23.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:23.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:24.512Z caller=manager.go:625 component="rule manager" 
group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:24.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:24.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:26.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:26.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:26.318Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:28:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:27.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:28.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:28.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:28.638Z caller=scrape.go:1190 component="scrape manager" 
Prometheus log excerpt (reflowed, one record per line). Between 09:28:28 and 09:28:54 UTC every scrape commit and rule evaluation fails with the same error: the volume backing /prometheus is full, so writes to WAL segment /prometheus/wal/00000039 return "no space left on device".

scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:28:28.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:28:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

[... roughly 200 near-identical records omitted (09:28:28–09:28:52): "Scrape commit failed" (caller=scrape.go:1190) repeats for every serviceMonitor scrape pool in the openshift-* namespaces — kubelet, node-exporter, etcd, kube-state-metrics, alertmanager, thanos, prometheus itself, the API servers, controller managers, schedulers, DNS, ingress, multus, kuryr, the Cinder CSI driver controllers, OLM, and the various cluster operators — and "Rule sample appending failed" (caller=manager.go:625) repeats for the rule groups telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kubernetes-system-kubelet, openshift-sre.rules, kube-scheduler.rules, apiserver-requests-in-flight, kube-apiserver.rules, kubernetes-recurring.rules, cluster-version, general.rules, kube-prometheus-general.rules, openshift-ingress.rules, cluster-network-operator-kuryr.rules, kubernetes-storage, openshift-kubernetes.rules, kubernetes-system-apiserver, and kube-prometheus-node-recording.rules. Every record carries the same err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (with "log series" in place of "log samples" in a minority of records). ...]

level=error ts=2022-10-13T09:28:52.215Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8ASP7QAPZRP62881BMVE38.tmp-for-creation: no space left on device"

[... the same scrape and rule failures continue through 09:28:54; the captured output ends mid-record: ...]

level=error ts=2022-10-13T09:28:54.587Z caller=scrape.go:1190 component="scrape manager"
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:56.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.049Z caller=manager.go:625 component="rule 
manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:57.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:28:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:28:57.749Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:58.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:58.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:58.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:58.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:58.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:58.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:28:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:59.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:28:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:00.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:01.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:01.081Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:01.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:01.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:01.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:01.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.739Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.776Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:02.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:03.181Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:03.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:03.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:04.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:04.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:04.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:04.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:04.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:05.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:06.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:06.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:06.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:07.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:29:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:08.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:08.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:08.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:08.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:09.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:09.528Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:09.765Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:09.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:10.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:10.134Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:10.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:10.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:10.982Z caller=manager.go:625 
component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:11.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:11.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:12.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:12.369Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:12.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:12.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:12.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 
target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.660Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:13.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:13.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:14.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[The two entry types above repeat continuously from ts=2022-10-13T09:29:14.030Z through ts=2022-10-13T09:29:40.115Z; every entry fails with the same error, err="write to WAL: log samples (or log series): write /prometheus/wal/00000039: no space left on device". Entry counts below are approximate.
 - rule manager (level=warn, caller=manager.go:625, msg="Rule sample appending failed"), by group: kube-apiserver.rules (x18), openshift-kubernetes.rules (x42, all within 09:29:22), k8s.rules (x12), node-exporter.rules (x11), openshift-ingress.rules (x8), openshift-monitoring.rules (x6), openshift-etcd-telemetry.rules (x5), cluster-version (x4), kube-prometheus-node-recording.rules (x4), kubelet.rules (x3), plus one or two entries each for general.rules, kubernetes-recurring.rules, kube-prometheus-general.rules, cluster-network-operator-kuryr.rules, kubernetes-storage, kubernetes-system-apiserver, multus-admission-controller-monitor-service.rules, node.rules, prometheus, and telemeter.rules.
 - scrape manager (level=error, caller=scrape.go:1190, msg="Scrape commit failed"), for roughly 50 distinct serviceMonitor scrape pools across the openshift-* namespaces: kubelet (all four endpoints), node-exporter, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, openshift-apiserver and openshift-apiserver-operator-check-endpoints, dns-default, router-default, multus monitor-network and monitor-multus-admission-controller, monitor-kuryr-cni, machine-config-daemon, machine-api-controllers, openstack-cinder-csi-driver-controller-monitor (all four ports), the monitoring stack itself (prometheus-k8s, alertmanager, thanos-querier, thanos-sidecar, grafana, kube-state-metrics, openshift-state-metrics, telemeter-client, prometheus-operator, prometheus-adapter, cluster-monitoring-operator), oauth-openshift, image-registry, network-check-source, and the operator pods (etcd-operator, olm-operator, catalog-operator, marketplace-operator, insights-operator, dns-operator, service-ca-operator, authentication-operator, cloud-credential-operator, cluster-autoscaler-operator, machine-api-operator, image-registry-operator, config-operator, cluster-storage-operator, cluster-machine-approver, cluster-version-operator, kube-apiserver-operator, kube-controller-manager-operator, kube-scheduler-operator, openshift-controller-manager, openshift-controller-manager-operator).]
level=error ts=2022-10-13T09:29:40.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape 
commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:40.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:40.979Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:29:41.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:42.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:42.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.095Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.651Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:43.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:43.947Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:43.970Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:29:44.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.132Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.240Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:44.457Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:44.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:44.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:44.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:45.257Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:46.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:46.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:47.034Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:47.172Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:47.245Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:47.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:47.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.680Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:48.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.130Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:29:49.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:29:49.775Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:49.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:49.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:50.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:50.468Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:51.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:51.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:52.216Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AVGTR7ANW50JD02BYG6R9.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:29:52.558Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.584Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 
target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:52.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:54.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:56.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:29:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:29:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:29:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:29:58.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[dozens of further "Scrape commit failed" and "Rule sample appending failed" entries between 2022-10-13T09:29:57Z and 2022-10-13T09:30:22Z omitted; they differ only in timestamp, scrape_pool/target, and rule group. Every serviceMonitor scrape across the openshift-* namespaces (openshift-kube-apiserver, openshift-apiserver, openshift-multus, openshift-dns, openshift-monitoring, openshift-machine-config-operator, openshift-machine-api, openshift-kuryr, openshift-ingress, openshift-authentication, openshift-cluster-csi-drivers, and others) and every rule group (openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kube-scheduler.rules, kube-apiserver.rules, openshift-ingress.rules, openshift-kubernetes.rules, cluster-version, general.rules, kubernetes-storage, among others) fails with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" or the "log series" variant of the same error.]
level=warn ts=2022-10-13T09:30:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:30:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.577Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:22.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:23.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:23.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:30:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:24.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:26.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:26.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:26.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:26.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.113Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:27.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:28.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:28.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:28.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:28.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:30:28.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:28.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:29.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:29.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:30.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:30.501Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:30.586Z caller=manager.go:625 component="rule manager" 
group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:31.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:31.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:31.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:31.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.549Z caller=manager.go:625 component="rule 
manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:34.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:34.443Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:34.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:34.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:35.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:35.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:35.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:35.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:36.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:36.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:36.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:37.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:37.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:38.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:30:38.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.727Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:40.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:40.397Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:41.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:42.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 
target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:42.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:42.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.055Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:43.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.139Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:44.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.348Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:44.477Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:44.749Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:45.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:30:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:45.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:46.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:46.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:47.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:30:47.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:47.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:47.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:47.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.468Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:48.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 
target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.591Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.751Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.760Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:50.207Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:50.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:51.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:52.216Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AXBDRNFQ2ZEWWQMS8Y2C1.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:30:52.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.575Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:30:52.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:54.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:54.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:54.716Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:56.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:56.343Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:56.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.114Z caller=manager.go:625 component="rule manager" group=prometheus 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:57.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.682Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.694Z caller=manager.go:625 
component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.738Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.739Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:30:57.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:57.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:58.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:58.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:58.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:30:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:59.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:30:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:31:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:31:02.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[... hundreds of near-identical entries elided: the same "no space left on device" error on /prometheus/wal/00000039 recurs continuously from 09:31:01Z through 09:31:27Z, alternating between msg="Scrape commit failed" (scrape manager) and msg="Rule sample appending failed" (rule manager), with err citing either "log samples" or "log series". Affected scrape pools include kubelet, node-exporter, etcd, dns-default, router-default, multus, kuryr-cni, machine-config-daemon, alertmanager, thanos-querier, thanos-sidecar, prometheus-adapter, prometheus-k8s, grafana, telemeter-client, kube-state-metrics, image-registry, oauth-openshift, and the openshift-* operator serviceMonitors. Affected rule groups include openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kube-scheduler.rules, kube-apiserver.rules, openshift-ingress.rules, openshift-kubernetes.rules, openshift-monitoring.rules, openshift-sre.rules, kube-prometheus-general.rules, kube-prometheus-node-recording.rules, multus-admission-controller-monitor-service.rules, cluster-network-operator-kuryr.rules, k8s.rules, node.rules, cluster-version, general.rules, kubernetes-storage, kubernetes-system-apiserver, kubernetes-system-kubelet, kubernetes-recurring.rules, apiserver-requests-in-flight, and prometheus ...]
level=warn ts=2022-10-13T09:31:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:27.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:27.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:27.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:27.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:28.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:28.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:28.641Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:28.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:29.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:29.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:29.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:29.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:30.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:30.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:31.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:31.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:31:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:32.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:33.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:33.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:31:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:33.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:33.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:34.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:34.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:34.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:35.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:35.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:35.832Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:36.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:36.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:36.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:31:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:36.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:37.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:38.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:38.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:38.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:38.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:38.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:38.881Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:39.167Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:39.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:39.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:40.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:40.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:40.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 
target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:40.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:41.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:42.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:42.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:42.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:42.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:42.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:31:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:43.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:43.983Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.004Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.084Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:44.110Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:44.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.325Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:44.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:44.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:44.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:44.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:45.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:45.329Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:46.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:47.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:47.186Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:47.301Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:47.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:47.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.522Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:48.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:31:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.791Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.853Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.951Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:49.960Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:50.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:50.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:52.218Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8AZ60SJAMHV9AV9XQS4MJ4.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:31:52.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.570Z caller=manager.go:625 
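The one record in this window that is not a failed append is the tsdb compaction failure at 09:31:52.218: the in-memory head block cannot be persisted because even its temporary directory cannot be created. Roughly speaking, Prometheus truncates WAL segments only after the head has been successfully compacted to disk, so a full volume at this point is self-sustaining: the WAL cannot shrink, and every subsequent scrape and rule append keeps hitting ENOSPC. A minimal free-space check, as a sketch; the /prometheus mount path comes from the errors themselves, while running this inside the prometheus-k8s pod (and python3 being available there) are assumptions, not something this report shows:

```python
# space_check.py: report usage of the Prometheus data mount.
import shutil

# /prometheus is the mount path named in the WAL errors above (assumed mounted here).
total, used, free = shutil.disk_usage("/prometheus")
gib = 2 ** 30
print(f"/prometheus: {used / gib:.1f} GiB used, "
      f"{free / gib:.1f} GiB free of {total / gib:.1f} GiB")
```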
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.586Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:52.629Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:52.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:52.839Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:54.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:54.715Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:56.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:56.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:56.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.119Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:57.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:31:57.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:58.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:58.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:58.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:58.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:58.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:58.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:59.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:59.534Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:59.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:59.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:31:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:00.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:00.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:00.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:01.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:01.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:32:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:01.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:01.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.178Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:03.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:03.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:03.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:04.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:04.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:05.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 
target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:05.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:06.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:06.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:06.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:06.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:07.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:08.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:32:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:08.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:08.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:08.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:08.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:08.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:08.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:09.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:32:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:09.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:10.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:10.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:32:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:11.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:11.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:12.091Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:12.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:12.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.056Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.347Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:13.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:13.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:13.981Z caller=manager.go:625 component="rule manager" 
[... the kube-apiserver.rules warning repeats at millisecond intervals through ts=2022-10-13T09:32:14.386Z ...]
level=error ts=2022-10-13T09:32:14.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:14.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:14.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:14.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:15.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:16.536Z 
caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:16.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:16.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:17.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:17.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:17.231Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:32:18.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.910Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:18.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.705Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.907Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.915Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:19.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:20.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:20.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log 
level=error ts=2022-10-13T09:32:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:22.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... this warning repeats continuously for group=openshift-kubernetes.rules through ts=2022-10-13T09:32:22.616Z ...]
level=error ts=2022-10-13T09:32:22.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:24.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... the kube-prometheus-node-recording.rules warning repeats through ts=2022-10-13T09:32:24.511Z ...]
level=error ts=2022-10-13T09:32:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:26.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:32:27.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... identical failures continue for every remaining scrape pool (kube-apiserver, openshift-apiserver, dns-operator, machine-api-controllers, router-default, kube-controller-manager-operator, multus-admission-controller, ...) and for the node.rules, prometheus, openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, and multus-admission-controller-monitor-service.rules groups ...]
level=error ts=2022-10-13T09:32:31.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:32:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... the node-exporter.rules warning repeats through ts=2022-10-13T09:32:32.549Z ...]
level=warn ts=2022-10-13T09:32:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:32.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:32.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:32.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:33.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:32:33.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:33.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:33.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:34.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:34.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:34.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:35.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:35.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:35.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:36.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:36.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:36.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:32:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:36.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:37.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:38.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:38.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:38.864Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:40.103Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:40.410Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 
target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:40.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:40.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:42.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:42.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:42.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:42.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.079Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:32:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:43.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:43.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:43.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:43.992Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.098Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.196Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.307Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:44.434Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:44.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:45.319Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:46.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:46.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:47.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:47.190Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:47.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:47.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:47.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 
target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:32:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.452Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.615Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.782Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.790Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:49.946Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:50.179Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:50.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:51.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:51.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:52.218Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B10KTTH2B1ZVVDGYW0TCW.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:32:52.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:32:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.585Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:52.815Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:54.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:54.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:54.713Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:54.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:56.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:56.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:56.337Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:57.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:32:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:58.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:58.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:58.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:58.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:58.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:59.529Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:59.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:59.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:32:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:00.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:01.083Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:33:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:01.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:01.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:01.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:01.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.158Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:03.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:03.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:03.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:04.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:04.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:04.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:05.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 
target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:05.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:05.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:06.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:06.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:06.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:08.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:33:08.262Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:08.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:08.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.716Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:10.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:10.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:10.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:12.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:12.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 
target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:13.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:13.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:33:14.019Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.089Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.129Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.227Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:14.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.331Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:14.440Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:14.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:33:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:14.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:14.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:15.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:16.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:16.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:16.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:16.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:16.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:17.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:17.280Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:17.366Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:17.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:17.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:18.260Z caller=manager.go:625 component="rule 
manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.514Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.076Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.507Z caller=manager.go:625 component="rule manager" 
group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.730Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.928Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:19.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.937Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:20.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:20.472Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:21.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:21.716Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:22.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.607Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:22.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:24.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:24.534Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:24.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:26.249Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:26.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:26.341Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:26.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:27.447Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:27.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:28.649Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:28.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:28.862Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:29.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:29.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:29.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:30.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:30.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:31.038Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:31.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:31.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:31.990Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 
target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:32.549Z 
caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.942Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:32.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.892Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:33.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:34.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:34.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:34.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:34.587Z 
level=error ts=2022-10-13T09:33:34.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:35.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:36.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:38.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:40.408Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:40.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:41.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:42.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:33:42.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:42.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.098Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:43.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.144Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.257Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:44.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.029Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.179Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:47.320Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:48.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:48.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:48.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:49.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:49.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:49.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:49.310Z 
level=error ts=2022-10-13T09:33:49.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.712Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.873Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.883Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:49.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:50.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:50.310Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:51.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.219Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B2V6VA0HVY0R99F0MH5HZ.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:33:52.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
level=warn ts=2022-10-13T09:33:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:52.610Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:52.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:33:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:33:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:33:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:57.982Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:58.379Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:58.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:58.637Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:58.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:59.529Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:59.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:33:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:00.483Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:00.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:01.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:01.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:34:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:01.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:01.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:02.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:02.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:02.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:02.778Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:02.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:02.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:03.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:04.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:04.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:04.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:04.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:05.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:05.619Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 
target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:05.840Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:06.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:06.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:06.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:07.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:08.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:34:08.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:08.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:08.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:08.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:08.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:08.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:09.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:09.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:09.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:34:09.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:09.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:10.006Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:10.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:10.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:34:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:11.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:11.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:12.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:12.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:12.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:12.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.093Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.357Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:13.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:13.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:13.994Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.234Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.334Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:14.436Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:14.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:15.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:15.330Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:16.118Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:16.537Z 
caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:16.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:17.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:17.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:17.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:17.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:17.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:18.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device"
level=error ts=2022-10-13T09:34:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:34:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[Remaining entries from ts=2022-10-13T09:34:18Z through ts=2022-10-13T09:34:44Z elided: all are near-identical repetitions of the messages above. Every entry reports the same error, err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (a few say "log series" instead of "log samples"); the Prometheus WAL volume has run out of disk space.
Failing scrape pools (component="scrape manager", msg="Scrape commit failed") cover serviceMonitor targets in: openshift-apiserver, openshift-apiserver-operator, openshift-authentication, openshift-authentication-operator, openshift-cloud-credential-operator, openshift-cluster-csi-drivers, openshift-cluster-machine-approver, openshift-cluster-storage-operator, openshift-cluster-version, openshift-config-operator, openshift-console-operator, openshift-controller-manager, openshift-controller-manager-operator, openshift-dns, openshift-dns-operator, openshift-image-registry, openshift-ingress, openshift-ingress-operator, openshift-insights, openshift-kube-apiserver, openshift-kube-apiserver-operator, openshift-kube-controller-manager, openshift-kube-controller-manager-operator, openshift-kube-scheduler, openshift-kube-scheduler-operator, openshift-kuryr, openshift-machine-api, openshift-machine-config-operator, openshift-marketplace, openshift-monitoring, openshift-multus, openshift-network-diagnostics, openshift-operator-lifecycle-manager, openshift-service-ca-operator.
Failing rule groups (component="rule manager", msg="Rule sample appending failed"): apiserver-requests-in-flight, cluster-network-operator-kuryr.rules, cluster-version, k8s.rules, kube-apiserver.rules, kube-prometheus-node-recording.rules, kube-scheduler.rules, kubelet.rules, kubernetes-storage, kubernetes-system-apiserver, kubernetes-system-kubelet, multus-admission-controller-monitor-service.rules, node.rules, node-exporter.rules, openshift-etcd-telemetry.rules, openshift-ingress.rules, openshift-kubernetes.rules, openshift-monitoring.rules, openshift-sre.rules, prometheus, telemeter.rules.]
level=error ts=2022-10-13T09:34:44.111Z
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:44.136Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:44.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:44.380Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:44.478Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:44.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:44.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:44.860Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:45.319Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:46.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:47.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:47.180Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:47.229Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:47.563Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 
target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:48.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:34:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.549Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.753Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.761Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:49.949Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:50.216Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:51.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:52.220Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B4NSV11NWXHB0SXZJMJJY.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:34:52.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:34:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.585Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:52.814Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:54.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:54.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:54.720Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:56.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:56.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:56.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:56.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.710Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.740Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.778Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.779Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:34:57.780Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:58.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:58.602Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:58.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:34:59.532Z 
level=error ts=2022-10-13T09:34:59.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:34:59.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

[Excerpt condensed: from ts=2022-10-13T09:34:59Z through ts=2022-10-13T09:35:26Z, where this excerpt is truncated, the prometheus-k8s pod emits these same two record shapes continuously, differing only in timestamp, target, and pool/group. Every write to WAL segment /prometheus/wal/00000039 fails with "no space left on device", so no scrape commit or rule evaluation is persisted during this window.

Scrape pools reporting msg="Scrape commit failed" (serviceMonitor/<namespace>/<monitor>/<n>):
openshift-monitoring: node-exporter/0, etcd/0, kubelet/0-3, prometheus-k8s/0, prometheus-adapter/0, prometheus-operator/0, thanos-querier/0, thanos-sidecar/0, alertmanager/0, grafana/0, telemeter-client/0, kube-state-metrics/1, cluster-monitoring-operator/0
openshift-apiserver: openshift-apiserver/0, openshift-apiserver-operator-check-endpoints/0
openshift-multus: monitor-network/0, monitor-multus-admission-controller/0
openshift-machine-api: machine-api-controllers/0-2, machine-api-operator/0, cluster-autoscaler-operator/0
openshift-cluster-csi-drivers: openstack-cinder-csi-driver-controller-monitor/0-3
openshift-kube-apiserver: kube-apiserver/0; openshift-kube-apiserver-operator: kube-apiserver-operator/0
openshift-kube-controller-manager: kube-controller-manager/0; openshift-kube-controller-manager-operator: kube-controller-manager-operator/0
openshift-kube-scheduler: kube-scheduler/0-1; openshift-kube-scheduler-operator: kube-scheduler-operator/0
openshift-dns: dns-default/0; openshift-dns-operator: dns-operator/0
openshift-ingress: router-default/0; openshift-ingress-operator: ingress-operator/0
openshift-authentication: oauth-openshift/0; openshift-authentication-operator: authentication-operator/0
openshift-controller-manager: openshift-controller-manager/0; openshift-controller-manager-operator: openshift-controller-manager-operator/0
openshift-apiserver-operator: openshift-apiserver-operator/0
openshift-etcd-operator: etcd-operator/0
openshift-cluster-version: cluster-version-operator/0
openshift-machine-config-operator: machine-config-daemon/0
openshift-kuryr: monitor-kuryr-cni/0
openshift-cloud-credential-operator: cloud-credential-operator/0
openshift-config-operator: config-operator/0
openshift-network-diagnostics: network-check-source/0
openshift-console-operator: console-operator/0
openshift-cluster-storage-operator: cluster-storage-operator/0
openshift-cluster-machine-approver: cluster-machine-approver/0
openshift-marketplace: marketplace-operator/0
openshift-operator-lifecycle-manager: olm-operator/0, catalog-operator/0
openshift-service-ca-operator: service-ca-operator/0
openshift-insights: insights-operator/0
openshift-image-registry: image-registry/0

Rule groups reporting msg="Rule sample appending failed":
telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kube-scheduler.rules, openshift-sre.rules, kube-apiserver.rules, apiserver-requests-in-flight, kubernetes-system-kubelet, kubernetes-system-apiserver, kubernetes-recurring.rules, kubernetes-storage, cluster-version, general.rules, kube-prometheus-general.rules, kube-prometheus-node-recording.rules, openshift-kubernetes.rules, openshift-ingress.rules, cluster-network-operator-kuryr.rules, multus-admission-controller-monitor-service.rules]
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:27.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:27.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:28.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:28.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:28.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:29.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:29.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:30.518Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:30.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:31.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 
target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:31.063Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:31.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:31.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:31.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:31.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.736Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:32.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:33.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:33.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:33.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:33.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:34.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:34.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:34.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:34.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:34.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:35.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:35.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:35.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:36.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:36.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:36.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:36.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:37.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:37.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:38.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:38.366Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:38.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:38.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:35:39.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:39.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:39.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:40.107Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:40.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:40.590Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:40.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:41.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:42.156Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:42.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 
target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:42.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:42.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:43.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:43.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:43.164Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:43.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:35:43.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:43.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:43.954Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:43.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:43.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.046Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:44.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.133Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.381Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:35:44.515Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:44.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:44.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:44.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:45.266Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:35:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:35:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:47.353Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:48.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 7 more times for group=openshift-ingress.rules between ts=2022-10-13T09:35:49.503Z and 09:35:49.507Z]
level=error ts=2022-10-13T09:35:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.879Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:49.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:50.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:50.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:50.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.220Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B6GCW97TGRP7R06CR139D.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:35:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 41 more times for group=openshift-kubernetes.rules between ts=2022-10-13T09:35:52.566Z and 09:35:52.607Z; one scrape error interleaved at 09:35:52.566Z, shown next]
level=error ts=2022-10-13T09:35:52.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:52.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 3 more times for group=kube-prometheus-node-recording.rules between ts=2022-10-13T09:35:54.511Z and 09:35:54.512Z]
level=error ts=2022-10-13T09:35:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:35:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 5 more times for group=openshift-monitoring.rules between ts=2022-10-13T09:35:57.616Z and 09:35:57.619Z]
level=warn ts=2022-10-13T09:35:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 11 more times for group=k8s.rules between ts=2022-10-13T09:35:57.659Z and 09:35:57.718Z]
level=error ts=2022-10-13T09:35:57.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:58.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:35:59.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 4 more times for group=openshift-etcd-telemetry.rules between ts=2022-10-13T09:36:01.487Z and 09:36:01.489Z]
level=error ts=2022-10-13T09:36:01.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:01.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.229Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 10 more times for group=node-exporter.rules between ts=2022-10-13T09:36:02.545Z and 09:36:02.550Z]
level=error ts=2022-10-13T09:36:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.745Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:03.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 2 more times for group=kubelet.rules between ts=2022-10-13T09:36:04.300Z and 09:36:04.301Z]
level=error ts=2022-10-13T09:36:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:05.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:06.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:08.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:09.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[message repeated 2 more times for group=kube-scheduler.rules between ts=2022-10-13T09:36:10.981Z and 09:36:10.982Z]
level=warn
ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:11.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:12.092Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:12.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:12.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:12.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.073Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.123Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:13.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:13.971Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:13.991Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.013Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.024Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.040Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.143Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.274Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:14.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.376Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:14.476Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:14.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:14.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:14.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:14.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:16.788Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:16.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:17.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:17.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:17.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:17.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:18.921Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:36:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.799Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:19.994Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:20.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:20.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:20.411Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:21.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:21.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:22.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.578Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:36:22.620Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:22.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:22.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:23.058Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:23.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:24.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:24.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:24.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:26.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:26.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:26.899Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:36:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:36:27.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
[... identical "Scrape commit failed" (scrape manager) and "Rule sample appending failed" (rule manager) records repeat for every scrape pool and rule group in the cluster from 09:36:27Z through 09:36:52Z; every record reports "write to WAL: log samples|log series: write /prometheus/wal/00000039: no space left on device" ...]
level=error ts=2022-10-13T09:36:52.221Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8B8AZXXB6XW1DC2S9KKSP6.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:36:52.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... dozens more openshift-kubernetes.rules "Rule sample appending failed" records follow, all with the same "no space left on device" error ...]
level=warn ts=2022-10-13T09:36:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:52.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:53.240Z caller=manager.go:625 component="rule manager" 
group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:54.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:54.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:54.730Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:54.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 
target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:56.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:56.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=warn ts=2022-10-13T09:36:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.694Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:36:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:36:57.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:57.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:58.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:58.632Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:58.661Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:58.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:59.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:59.861Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:59.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:36:59.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:01.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:01.487Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:01.607Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:01.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:01.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.545Z caller=manager.go:625 
component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to 
WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:03.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:03.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:03.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:03.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:04.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:37:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:04.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:04.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:05.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:05.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:06.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:06.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:06.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:06.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:07.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:08.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:08.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:08.534Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.759Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:10.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:10.985Z caller=manager.go:625 
component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:11.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:12.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:12.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.094Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.441Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:13.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:13.963Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:13.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.127Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.228Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:14.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.338Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:14.439Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:14.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:14.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:37:14.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:15.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:15.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:16.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:17.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:17.249Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:17.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:17.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.315Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.117Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.529Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.697Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.859Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.870Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:19.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:20.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:20.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:21.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:22.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:37:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:22.610Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:22.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:23.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:24.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:24.509Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:24.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 
target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:24.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:26.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:26.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:27.442Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.686Z caller=manager.go:625 component="rule manager" 
group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:27.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:28.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:28.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:28.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:29.033Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:29.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:29.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:31.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:37:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:31.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:31.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:31.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:32.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:32.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:37:32.392Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:32.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:32.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:37:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:32.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:32.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:32.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:33.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:33.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:33.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:33.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:33.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:37:33.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:33.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:34.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:34.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:35.581Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:35.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:35.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:35.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:36.075Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:36.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:36.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:36.892Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:36.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:37.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:38.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:38.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:38.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:38.809Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:38.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.032Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:39.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:40.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:40.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:40.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 
target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:41.270Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:42.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:42.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.650Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.789Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:43.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:43.987Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.028Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:44.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:37:44.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:44.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.356Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:44.470Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:45.395Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:46.040Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:46.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:46.535Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:46.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:47.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:47.295Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:47.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:47.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 
target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:48.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.934Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:49.940Z caller=manager.go:625 
component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:49.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:50.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:50.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:50.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:50.541Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:51.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:52.222Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BA5JYXBEZ1X7GS8P1N79R.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:37:52.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.567Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:37:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.606Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:52.607Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:52.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:53.241Z caller=manager.go:625 component="rule manager" 
group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:54.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:54.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:54.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:54.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 
target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:56.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:56.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:37:57.745Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:37:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:58.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:58.631Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:58.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:58.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:59.853Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:37:59.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:00.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:00.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:00.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:01.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:01.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:01.488Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:01.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:01.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:01.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.150Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.545Z caller=manager.go:625 
component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to 
WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:03.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:03.405Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:03.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:03.954Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:38:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:04.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:04.589Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:04.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:05.521Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:05.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:05.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:05.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:06.226Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:06.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:06.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:06.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:07.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:07.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:08.344Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:08.536Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:08.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:08.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.720Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:10.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:10.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:10.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:10.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:10.989Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:11.296Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:11.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:12.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:12.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:12.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:12.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:13.682Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:13.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:13.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:13.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.000Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.008Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.073Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:14.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.319Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:14.421Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:38:14.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:14.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:15.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:16.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write 
to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:16.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:17.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:17.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:17.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:17.432Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:17.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:17.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:18.261Z caller=manager.go:625 component="rule manager" 
group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:18.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.186Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:19.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:20.201Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:20.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:20.436Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:20.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:20.926Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:21.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:21.683Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:22.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.573Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.620Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:22.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:22.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:24.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:24.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:24.554Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:24.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:24.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:24.723Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:24.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:26.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:26.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:26.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:26.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:26.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:27.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.665Z caller=manager.go:625 component="rule manager" group=k8s.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.678Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.729Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:27.730Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:27.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:28.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:28.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:28.733Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:28.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:29.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:29.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:29.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:29.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:30.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:30.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:31.065Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:31.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:31.491Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:31.491Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:31.491Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:31.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:31.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:31.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.010Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.148Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.557Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:32.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:33.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:33.910Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:33.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:34.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:38:34.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:34.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:34.705Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:35.472Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:35.835Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:35.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:35.992Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:36.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:36.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:38:36.889Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:36.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:37.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.345Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:38.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.763Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.729Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.133Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.263Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.662Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.789Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:43.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.948Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:43.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.007Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.017Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.091Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.235Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.356Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:44.474Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.265Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.113Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:47.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:48.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.761Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.934Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:49.943Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:50.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:50.368Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:51.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.223Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BC05ZQ0Z4A6VTEVEK7AQ5.tmp-for-creation: no space left on device"
level=error ts=2022-10-13T09:38:52.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.608Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:52.609Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:52.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:53.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:57.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.688Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.697Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.719Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:38:57.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.051Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:58.806Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:38:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.523Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.626Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.895Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:39:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:39:02.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:02.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:02.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:03.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:03.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:03.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:04.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:39:04.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:04.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:04.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:04.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:05.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:05.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:05.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:06.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:06.335Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:06.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:07.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:07.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:08.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:08.531Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:08.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:08.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:08.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:08.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.774Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.805Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:10.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:10.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:10.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:39:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:11.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:12.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:12.291Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:12.409Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:12.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.351Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.656Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:13.990Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule 
sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.106Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.141Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.251Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:14.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.370Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:14.474Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:14.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:14.814Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:14.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:15.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:15.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:16.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:16.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:16.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:17.048Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:17.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:17.269Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:17.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:17.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.019Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 
target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.838Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.847Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:20.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:20.265Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:21.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.012Z caller=manager.go:625 
component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:22.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.580Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:22.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:22.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:24.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:24.535Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:26.231Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:26.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:26.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.048Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:39:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:27.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.701Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.713Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:27.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:27.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:28.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:28.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:28.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:28.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:28.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:29.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:29.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:29.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:29.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:30.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:30.517Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:31.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:31.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 
target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:31.144Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:31.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:31.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:31.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 
target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.804Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:32.981Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:33.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:33.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 
target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:33.927Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:34.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:34.694Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:35.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:35.825Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:36.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:36.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:36.890Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:36.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:37.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:37.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:38.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:38.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:38.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:38.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:38.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:38.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 
target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.782Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:40.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:40.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:40.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:41.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:41.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:42.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:42.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=error ts=2022-10-13T09:39:42.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.108Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.655Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.788Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:43.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:43.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:43.962Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:43.980Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:43.999Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.031Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.104Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:44.112Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.222Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:44.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.342Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:44.466Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:44.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:44.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:44.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:45.406Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:46.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:46.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:47.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:47.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:39:47.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:47.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:47.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:47.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 
target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.875Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:48.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.939Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:50.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:50.121Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:50.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:50.568Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:52.225Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BDTS1TWCZ2F6DAQFFJYYJ.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T09:39:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.574Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:52.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:52.811Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:52.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:53.240Z caller=manager.go:625 component="rule 
manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:54.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:54.571Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:54.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 
target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:56.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:56.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:57.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=warn ts=2022-10-13T09:39:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.675Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.687Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.699Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.712Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:39:57.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:39:57.763Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:58.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:58.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:58.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:59.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:59.845Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:59.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:39:59.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:00.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:00.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:01.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:01.487Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:01.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.545Z caller=manager.go:625 
component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to 
WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:03.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:03.388Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:03.477Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:03.882Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:03.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:03.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:04.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:40:04.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:04.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:05.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:05.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:05.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:06.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:06.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:06.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:07.187Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:07.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:07.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:08.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:08.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:40:08.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:08.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:08.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:08.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.720Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:10.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:10.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:10.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:10.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:10.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: 
level=warn ts=2022-10-13T09:40:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.353Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:12.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.101Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:13.858Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.950Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.020Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.099Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.243Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:14.449Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:14.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.111Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:16.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:17.181Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:18.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.495Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:18.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.653Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.835Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.843Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:20.276Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:21.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.579Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:22.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.612Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:22.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:23.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:24.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
level=error ts=2022-10-13T09:40:24.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.339Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.493Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:26.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:27.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.714Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.715Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:27.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:28.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:29.953Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:40:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:30.989Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:40:31.034Z caller=scrape.go:1190 component="scrape manager"
scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:31.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:31.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:31.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.151Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.559Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.773Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:32.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.904Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:34.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:34.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:34.694Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:34.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:35.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:35.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:35.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:36.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:36.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:36.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:36.906Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:36.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:37.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:38.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:38.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:38.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:38.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:38.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.532Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:39.995Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:40.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:40.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:41.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:41.303Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:42.012Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:42.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:40:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.057Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.149Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.791Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:43.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:43.966Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:43.985Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.049Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.053Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.105Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.147Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.258Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:44.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.369Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:44.476Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:44.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:44.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:44.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:44.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:45.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:45.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:45.393Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:46.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:46.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:47.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:47.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:47.235Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:47.382Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.757Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:40:49.456Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.599Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.625Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.754Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.762Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:49.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:50.208Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:50.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:51.290Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:52.226Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BFNC2Q2KXAKJPNX75N0Y3.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:40:52.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.575Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.617Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:52.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:54.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:54.670Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:56.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:56.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:57.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.616Z 
caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.684Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.726Z caller=manager.go:625 component="rule manager" group=k8s.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:40:57.727Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:58.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:58.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:58.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:58.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:59.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 
target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:59.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:40:59.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:00.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:00.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:01.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:01.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:01.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:01.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:01.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:01.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:01.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:02.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:02.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:02.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:02.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:02.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:02.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:41:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:02.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:03.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:03.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:03.948Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:04.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:04.193Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:04.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:04.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:04.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:05.473Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:05.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:41:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:05.986Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:06.264Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:06.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:06.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:06.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:06.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:08.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:08.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:08.563Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:08.751Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:08.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:08.885Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:09.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:09.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:09.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:09.767Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 
Prometheus log excerpt (platform Prometheus, openshift-monitoring), 2022-10-13T09:41:09Z through 09:41:34Z: roughly 240 records in this window, all reporting the same failure. The TSDB write-ahead log cannot be written because the volume backing /prometheus is full, so every scrape commit and every rule evaluation fails. Two messages repeat throughout, differing only in timestamp and origin:

level=error ts=2022-10-13T09:41:09.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

level=warn ts=2022-10-13T09:41:10.731Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

A minority of the "Scrape commit failed" records read err="write to WAL: log series: ..." rather than "log samples: ..."; the WAL segment (/prometheus/wal/00000039) and the underlying error (no space left on device) are identical in every record.

"Scrape commit failed" (level=error, caller=scrape.go:1190) was logged for targets on every node (10.196.0.105, 10.196.0.199, 10.196.2.72, 10.196.2.169, 10.196.3.178, 10.196.3.187) and for pod endpoints (10.128.x.x) across these scrape pools:

serviceMonitor/openshift-apiserver/openshift-apiserver/0, serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0, serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0, serviceMonitor/openshift-authentication/oauth-openshift/0, serviceMonitor/openshift-authentication-operator/authentication-operator/0, serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0, serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/{0,1,2,3}, serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0, serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0, serviceMonitor/openshift-cluster-version/cluster-version-operator/0, serviceMonitor/openshift-console-operator/console-operator/0, serviceMonitor/openshift-controller-manager/openshift-controller-manager/0, serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0, serviceMonitor/openshift-dns/dns-default/0, serviceMonitor/openshift-dns-operator/dns-operator/0, serviceMonitor/openshift-etcd-operator/etcd-operator/0, serviceMonitor/openshift-image-registry/image-registry/0, serviceMonitor/openshift-image-registry/image-registry-operator/0, serviceMonitor/openshift-ingress/router-default/0, serviceMonitor/openshift-ingress-operator/ingress-operator/0, serviceMonitor/openshift-insights/insights-operator/0, serviceMonitor/openshift-kube-apiserver/kube-apiserver/0, serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0, serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0, serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0, serviceMonitor/openshift-kube-scheduler/kube-scheduler/{0,1}, serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0, serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0, serviceMonitor/openshift-machine-api/machine-api-controllers/{0,1,2}, serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0, serviceMonitor/openshift-marketplace/marketplace-operator/0, serviceMonitor/openshift-monitoring/alertmanager/0, serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0, serviceMonitor/openshift-monitoring/etcd/0, serviceMonitor/openshift-monitoring/grafana/0, serviceMonitor/openshift-monitoring/kube-state-metrics/1, serviceMonitor/openshift-monitoring/kubelet/{0,1,2,3}, serviceMonitor/openshift-monitoring/node-exporter/0, serviceMonitor/openshift-monitoring/openshift-state-metrics/1, serviceMonitor/openshift-monitoring/prometheus-adapter/0, serviceMonitor/openshift-monitoring/prometheus-k8s/0, serviceMonitor/openshift-monitoring/prometheus-operator/0, serviceMonitor/openshift-monitoring/telemeter-client/0, serviceMonitor/openshift-monitoring/thanos-querier/0, serviceMonitor/openshift-monitoring/thanos-sidecar/0, serviceMonitor/openshift-multus/monitor-multus-admission-controller/0, serviceMonitor/openshift-multus/monitor-network/0, serviceMonitor/openshift-network-diagnostics/network-check-source/0, serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0, serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0, serviceMonitor/openshift-service-ca-operator/service-ca-operator/0

"Rule sample appending failed" (level=warn, caller=manager.go:625) was logged for these rule groups, most heavily for openshift-kubernetes.rules (over 40 consecutive records between 09:41:22.566 and 09:41:22.613) and kube-apiserver.rules:

apiserver-requests-in-flight, cluster-network-operator-kuryr.rules, cluster-version, general.rules, k8s.rules, kube-apiserver.rules, kube-prometheus-general.rules, kube-prometheus-node-recording.rules, kube-scheduler.rules, kubelet.rules, kubernetes-recurring.rules, kubernetes-storage, kubernetes-system-apiserver, kubernetes-system-kubelet, multus-admission-controller-monitor-service.rules, node-exporter.rules, node.rules, openshift-etcd-telemetry.rules, openshift-ingress.rules, openshift-kubernetes.rules, openshift-monitoring.rules, openshift-sre.rules, prometheus, telemeter.rules
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:34.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:35.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:35.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:35.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:36.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:36.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:36.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:36.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:41:36.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:37.869Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:38.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:38.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:38.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:38.876Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:39.117Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:39.173Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:39.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:39.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:39.758Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:40.009Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:40.068Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:40.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:40.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:40.724Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:40.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:41.282Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:41.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:42.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:42.275Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:42.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" 
err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:42.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:42.708Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:43.088Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:43.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:43.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:43.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:43.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:43.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:43.679Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:43.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:41:43.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:43.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:43.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.058Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.098Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.128Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.231Z caller=manager.go:625 component="rule 
manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:44.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.349Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:44.488Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:44.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:44.822Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:44.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:45.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:46.039Z 
caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:46.114Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:46.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:46.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:47.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:47.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:47.180Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:47.384Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:47.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:47.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.510Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 
target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:48.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.457Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.649Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.810Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.818Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:50.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:50.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:51.228Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:51.679Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:41:52.227Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BHFZ3TJYYY26MDDGWMH17.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:41:52.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:41:52.575Z caller=manager.go:625 component="rule 
manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"

Representative entries from the remainder of the log:

level=warn ts=2022-10-13T09:41:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:52.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:41:54.551Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"

These two messages ("Rule sample appending failed" and "Scrape commit failed", in the "log samples" and "log series" variants) repeat continuously from 2022-10-13T09:41:52Z through at least 09:42:19Z, where the log is truncated. Every entry carries the same root error, write /prometheus/wal/00000039: no space left on device: the volume backing the Prometheus write-ahead log is full, so both rule evaluation and scrape commits fail.

Rule sample appending failed (caller=manager.go:625, component="rule manager") is logged for the rule groups: openshift-kubernetes.rules, kubernetes-system-apiserver, kube-prometheus-node-recording.rules, multus-admission-controller-monitor-service.rules, node.rules, prometheus, openshift-monitoring.rules, k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kubernetes-system-kubelet, kube-scheduler.rules, openshift-sre.rules, apiserver-requests-in-flight, kube-apiserver.rules, kubernetes-recurring.rules, cluster-version, general.rules, kube-prometheus-general.rules, openshift-ingress.rules.

Scrape commit failed (caller=scrape.go:1190, component="scrape manager") is logged across both node endpoints (10.196.x.y) and pod endpoints (10.128.x.y) for targets in the scrape pools: serviceMonitor/openshift-apiserver/openshift-apiserver/0, openshift-apiserver/openshift-apiserver-operator-check-endpoints/0, openshift-apiserver-operator/openshift-apiserver-operator/0, openshift-authentication/oauth-openshift/0, openshift-authentication-operator/authentication-operator/0, openshift-cloud-credential-operator/cloud-credential-operator/0, openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0-3, openshift-cluster-machine-approver/cluster-machine-approver/0, openshift-cluster-node-tuning-operator/node-tuning-operator/0, openshift-cluster-version/cluster-version-operator/0, openshift-config-operator/config-operator/0, openshift-console-operator/console-operator/0, openshift-controller-manager/openshift-controller-manager/0, openshift-controller-manager-operator/openshift-controller-manager-operator/0, openshift-dns/dns-default/0, openshift-dns-operator/dns-operator/0, openshift-etcd-operator/etcd-operator/0, openshift-image-registry/image-registry/0, openshift-ingress/router-default/0, openshift-ingress-operator/ingress-operator/0, openshift-insights/insights-operator/0, openshift-kube-apiserver/kube-apiserver/0, openshift-kube-apiserver-operator/kube-apiserver-operator/0, openshift-kube-controller-manager/kube-controller-manager/0, openshift-kube-controller-manager-operator/kube-controller-manager-operator/0, openshift-kube-scheduler/kube-scheduler/0-1, openshift-kube-scheduler-operator/kube-scheduler-operator/0, openshift-kuryr/monitor-kuryr-cni/0, openshift-kuryr/monitor-kuryr-controller/0, openshift-machine-api/cluster-autoscaler-operator/0, openshift-machine-api/machine-api-controllers/0-2, openshift-machine-api/machine-api-operator/0, openshift-machine-config-operator/machine-config-daemon/0, openshift-marketplace/marketplace-operator/0, openshift-monitoring/alertmanager/0, openshift-monitoring/etcd/0, openshift-monitoring/grafana/0, openshift-monitoring/kubelet/0-3, openshift-monitoring/node-exporter/0, openshift-monitoring/openshift-state-metrics/0, openshift-monitoring/prometheus-adapter/0, openshift-monitoring/prometheus-k8s/0, openshift-monitoring/prometheus-operator/0, openshift-monitoring/telemeter-client/0, openshift-monitoring/thanos-querier/0, openshift-monitoring/thanos-sidecar/0, openshift-multus/monitor-multus-admission-controller/0, openshift-multus/monitor-network/0, openshift-network-diagnostics/network-check-source/0, openshift-operator-lifecycle-manager/catalog-operator/0, openshift-operator-lifecycle-manager/olm-operator/0.
left on device" level=error ts=2022-10-13T09:42:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:19.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:19.798Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:19.967Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:19.975Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:20.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:20.471Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:21.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:21.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.012Z caller=manager.go:625 component="rule manager" 
group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:22.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.578Z caller=manager.go:625 component="rule 
manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:42:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:22.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:23.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:24.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:24.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:24.559Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:24.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:24.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:26.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:26.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:26.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.049Z caller=manager.go:625 component="rule 
manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:27.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:42:27.692Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.720Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.755Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.756Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:27.757Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:28.002Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:28.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:28.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:28.672Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:28.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:42:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:29.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:29.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:29.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:30.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:30.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:31.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:31.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:31.486Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:32.551Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.928Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:32.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:33.484Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:33.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:34.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:34.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:42:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:34.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:34.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:42:34.854Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 
target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.824Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:35.975Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.324Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:36.991Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:37.870Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:38.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.784Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.419Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.170Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.350Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.644Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:43.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:43.998Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.026Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.035Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.043Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.054Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.057Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.095Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.125Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.238Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:44.446Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:44.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:45.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.461Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.462Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:46.552Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:46.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:47.000Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:47.629Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:47.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:48.925Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.599Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:49.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.083Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:50.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.241Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.250Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:50.690Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.228Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BKAJ4E85V3R65ARJMJFCH.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:42:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.599Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.601Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.627Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:52.628Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:52.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:53.245Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:57.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:42:57.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:58.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:42:59.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write
to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:01.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:01.972Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.190Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.771Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.940Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:02.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:03.243Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:03.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:03.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:03.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:03.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:04.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:04.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:04.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:04.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:04.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:04.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:05.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:05.833Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 
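Every entry above is one of two variants of the same failure: the rule manager (manager.go:625) cannot append evaluated samples and the scrape manager (scrape.go:1190) cannot commit scraped samples, both because the volume backing /prometheus has no free space for WAL segment 00000039. For triaging a dump like this, a minimal sketch, assuming the log text has been saved locally (prometheus.log is a placeholder filename, not part of this report):

    import re
    from collections import Counter

    # Each logfmt entry carries the same err="... no space left on device";
    # what varies is the component (rule manager vs scrape manager) and the
    # rule group or scrape pool. Tally those to see how widespread the
    # failure is.
    pool_re = re.compile(r'scrape_pool=(\S+)')
    group_re = re.compile(r'group=(\S+)')

    pools, groups = Counter(), Counter()
    with open("prometheus.log") as f:  # placeholder path
        for line in f:
            if "no space left on device" not in line:
                continue
            if m := pool_re.search(line):
                pools[m.group(1)] += 1
            elif m := group_re.search(line):
                groups[m.group(1)] += 1

    print("scrape pools affected:", len(pools))
    print("rule groups affected:", len(groups))
    for name, n in (pools + groups).most_common(10):
        print(f"{n:6d}  {name}")

The point of counting rather than reading is that the breadth matters here: when essentially every scrape pool and rule group fails with the same errno, the cause is the storage, not any one exporter.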
level=error ts=2022-10-13T09:43:06.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=error ts=2022-10-13T09:43:08.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=warn ts=2022-10-13T09:43:10.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=warn ts=2022-10-13T09:43:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=warn ts=2022-10-13T09:43:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=warn ts=2022-10-13T09:43:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=warn ts=2022-10-13T09:43:13.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=warn ts=2022-10-13T09:43:16.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:43:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=warn ts=2022-10-13T09:43:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=warn ts=2022-10-13T09:43:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[...]
level=error ts=2022-10-13T09:43:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:43:18.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:18.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:18.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:18.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:18.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:18.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:18.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 
target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.502Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.607Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.758Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.767Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:19.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:20.209Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:20.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:21.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:22.545Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.583Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:22.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 
target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:23.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:24.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:24.550Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:24.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:24.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:24.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:26.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:26.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:27.435Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.655Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.704Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:43:27.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.760Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.761Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:27.762Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:28.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:28.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:28.376Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:28.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:28.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:28.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:29.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:29.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:29.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:29.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:29.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:30.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:30.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:30.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:31.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:31.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:31.487Z 
caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:31.610Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:31.963Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.391Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.560Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.770Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:32.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.917Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:34.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:34.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:34.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:34.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:43:35.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:35.820Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:35.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:36.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:36.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:36.325Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:36.899Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:36.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:37.627Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:37.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:38.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:38.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:38.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:38.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:38.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:38.868Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.160Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.737Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:39.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:40.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:40.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:40.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:41.299Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:41.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:42.327Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:42.404Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:42.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:42.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:43.949Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:43.978Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:43.997Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.064Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.067Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.107Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:44.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.138Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.236Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:44.276Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 
target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.382Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:44.488Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:44.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:44.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:44.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:45.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:45.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:45.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:46.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:46.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:46.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:46.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:46.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:47.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:47.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:47.616Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:47.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:48.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.124Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.461Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.504Z caller=manager.go:625 
component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.679Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.856Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.865Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:50.054Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:50.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:50.340Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:51.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:51.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:52.229Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BN55515B8XYZ4F97SYTEW.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:43:52.554Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:43:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.582Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.596Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:52.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:52.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:54.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:54.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:54.537Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:54.539Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:43:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:56.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:56.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:56.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.703Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.716Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:43:57.759Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:58.055Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:58.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:58.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:58.639Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:58.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:59.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:59.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:43:59.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 
target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:00.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:00.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:00.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:01.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:01.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:01.490Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:01.614Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:01.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:01.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:01.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.544Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.546Z 
caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.797Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:02.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:44:03.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:03.383Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:03.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:03.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:03.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:04.299Z 
caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:04.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:05.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:05.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:05.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:44:06.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:06.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:06.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:07.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:08.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:08.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:08.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:08.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:08.654Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:08.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:08.834Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.165Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.762Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.797Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:10.114Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:10.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:10.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:10.983Z caller=manager.go:625 component="rule manager" 
group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:11.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:11.302Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:12.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:12.301Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:12.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:12.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:12.707Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.354Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:13.982Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.003Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.041Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.061Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=warn ts=2022-10-13T09:44:14.068Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.166Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.285Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.381Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:14.484Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:14.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:14.852Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:15.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:15.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:16.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:16.462Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:16.463Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:16.556Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:16.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:16.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:17.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:17.185Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:17.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:17.546Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:17.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:17.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.313Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.320Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 
target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.455Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:20.093Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:20.205Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:20.266Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:20.278Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:20.726Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:21.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:21.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:22.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.579Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:22.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:44:22.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:23.059Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:23.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:24.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:24.568Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error 
[... the same two messages repeat without interruption from ts=2022-10-13T09:44:24Z through ts=2022-10-13T09:44:49Z: "Scrape commit failed" for every serviceMonitor scrape pool (openshift-apiserver, openshift-dns, openshift-etcd-operator, openshift-ingress, openshift-kube-apiserver, openshift-kube-controller-manager, openshift-kuryr, openshift-machine-api, openshift-machine-config-operator, openshift-monitoring, openshift-multus, and others) and "Rule sample appending failed" for every rule group (cluster-version, k8s.rules, kube-apiserver.rules, kube-scheduler.rules, kubelet.rules, node-exporter.rules, openshift-etcd-telemetry.rules, openshift-ingress.rules, openshift-monitoring.rules, and others), all with err="write to WAL: log samples|log series: write /prometheus/wal/00000039: no space left on device" ...]
component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:50.261Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:51.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:52.230Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BPZR6S0K2QSHKV6J0YV42.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T09:44:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write 
to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.575Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:52.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.598Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:52.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:52.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:52.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:44:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:54.538Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:54.549Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:54.555Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:54.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:54.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:56.006Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:56.006Z 
caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:56.232Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:56.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:56.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:56.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:57.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:44:57.629Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.630Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.631Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.690Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.731Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:44:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:57.993Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:58.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:58.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:58.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:58.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:58.810Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:59.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:59.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:59.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:59.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:44:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:00.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:00.502Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:00.582Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:01.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:01.116Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:01.607Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:01.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:01.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.143Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.400Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" 
group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.770Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.918Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:02.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:45:03.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:03.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:03.482Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:03.886Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:03.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:03.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:03.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:04.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:04.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:04.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:04.299Z caller=manager.go:625 
component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:04.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:04.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:05.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:05.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:05.830Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:05.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:05.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:45:06.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:06.333Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:06.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:06.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:07.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:08.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:08.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:08.676Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:08.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... many dozens of further level=error "Scrape commit failed" entries between 09:45:09Z and 09:45:33Z, one per scrape target, covering serviceMonitors across openshift-multus, openshift-monitoring (alertmanager, node-exporter, kubelet, etcd, thanos-sidecar, thanos-querier, prometheus-k8s, prometheus-adapter, prometheus-operator, grafana, telemeter-client, kube-state-metrics, openshift-state-metrics, cluster-monitoring-operator), openshift-dns(-operator), openshift-apiserver(-operator), openshift-kube-apiserver(-operator), openshift-kube-scheduler, openshift-kube-controller-manager(-operator), openshift-etcd-operator, openshift-ingress(-operator), openshift-authentication(-operator), openshift-machine-api, openshift-machine-config-operator, openshift-cluster-csi-drivers, openshift-kuryr, openshift-console-operator, openshift-controller-manager(-operator), openshift-marketplace, openshift-image-registry, openshift-insights, openshift-service-ca-operator, openshift-cloud-credential-operator, openshift-cluster-machine-approver, openshift-cluster-storage-operator, openshift-cluster-version, openshift-network-diagnostics and openshift-operator-lifecycle-manager, every one failing with err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (or the equivalent "log series" variant) ...]
level=warn ts=2022-10-13T09:45:10.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... interleaved with the scrape errors, the rule manager repeats the identical level=warn "Rule sample appending failed" entry dozens of times for the groups kube-scheduler.rules, kube-apiserver.rules, openshift-sre.rules, openshift-kubernetes.rules, openshift-ingress.rules, openshift-monitoring.rules, openshift-etcd-telemetry.rules, node-exporter.rules, node.rules, k8s.rules, kube-prometheus-general.rules, kube-prometheus-node-recording.rules, kubernetes-storage, kubernetes-system-apiserver, kubernetes-recurring.rules, cluster-version, cluster-network-operator-kuryr.rules, multus-admission-controller-monitor-service.rules, apiserver-requests-in-flight, telemeter.rules, general.rules and prometheus, all with the same "no space left on device" error ...]
level=error ts=2022-10-13T09:45:33.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:33.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:34.197Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:34.235Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:34.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:34.697Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:34.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 
target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:35.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:35.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:35.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:36.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:36.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:36.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:36.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:37.616Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:38.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:38.370Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:38.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:38.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:38.801Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:38.864Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:39.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:39.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:39.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left 
on device" level=error ts=2022-10-13T09:45:39.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:39.720Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:39.761Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:39.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:40.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:40.425Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:40.590Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:40.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:41.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:41.297Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:42.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:42.331Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:42.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:42.700Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.100Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:43.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:45:43.964Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:43.984Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.010Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.016Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.023Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.034Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.036Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:44.113Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.197Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:44.277Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.303Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:44.421Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:44.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:44.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:44.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:45.255Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:45.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:46.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:46.112Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:46.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:46.903Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:46.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:46.998Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:47.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:47.224Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:47.347Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:47.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:47.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: 
no space left on device" level=error ts=2022-10-13T09:45:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.028Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.755Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.871Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:48.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.129Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:45:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.869Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:49.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:50.077Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:50.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:50.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:50.613Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:51.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:51.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.012Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:45:52.231Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BRTB7YFPV7XK1WFP1TJ8X.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:45:52.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:45:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:45:52.580Z 
level=error ts=2022-10-13T09:45:52.795Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:52.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:53.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 3 more times through ts=2022-10-13T09:45:54.512Z]
level=error ts=2022-10-13T09:45:54.546Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.553Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.606Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:54.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 1 more time at ts=2022-10-13T09:45:56.005Z]
level=error ts=2022-10-13T09:45:56.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:56.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 1 more time at ts=2022-10-13T09:45:57.051Z]
level=warn ts=2022-10-13T09:45:57.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:57.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:45:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 5 more times through ts=2022-10-13T09:45:57.619Z]
level=warn ts=2022-10-13T09:45:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 11 more times through ts=2022-10-13T09:45:57.722Z]
level=error ts=2022-10-13T09:45:58.004Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:58.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:45:59.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:00.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 4 more times through ts=2022-10-13T09:46:01.489Z]
level=error ts=2022-10-13T09:46:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:01.967Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.145Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 10 more times through ts=2022-10-13T09:46:02.549Z]
level=error ts=2022-10-13T09:46:02.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:02.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:03.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 2 more times through ts=2022-10-13T09:46:04.300Z]
level=error ts=2022-10-13T09:46:04.433Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:04.704Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:05.828Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:07.618Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:08.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.119Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 5 more times through ts=2022-10-13T09:46:10.983Z]
level=warn ts=2022-10-13T09:46:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:11.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:12.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:13.348Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:13.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 14 more times through ts=2022-10-13T09:46:14.438Z]
level=error ts=2022-10-13T09:46:14.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:15.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [last message repeated 3 more times through ts=2022-10-13T09:46:18.260Z]
level=warn ts=2022-10-13T09:46:16.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:16.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:17.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:18.478Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
    [messages above repeated for the additional targets and further evaluation cycles of the same pools and rule groups through ts=2022-10-13T09:46:19Z]
level=error ts=2022-10-13T09:46:19.202Z caller=scrape.go:1190 component="scrape manager"
scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:19.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:19.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:19.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:19.528Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:19.598Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.771Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:19.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.936Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:19.944Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:19.944Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:20.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:20.368Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:21.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:21.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.565Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:22.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:46:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.618Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:22.619Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:22.838Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:23.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:23.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:24.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:24.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:24.543Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:24.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:24.586Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:24.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:26.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:26.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:26.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:26.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:26.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:27.458Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.681Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.691Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:27.724Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:28.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:28.378Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:28.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:28.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:28.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:28.815Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:29.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:29.529Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:29.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:29.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:29.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:30.247Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:30.519Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:31.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:31.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:31.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:46:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:31.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:31.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:31.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:46:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:32.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:33.389Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:33.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:33.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:33.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:46:33.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.233Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:34.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:35.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:36.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.665Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:38.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.166Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:39.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.007Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.600Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:40.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:41.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.152Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.336Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:42.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.127Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.159Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:43.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.965Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.008Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.025Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.033Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.045Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.086Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.214Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.343Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:44.447Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.817Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:44.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:45.394Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:46.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:46.997Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.184Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:47.188Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.321Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.572Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.624Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.904Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:48.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.067Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.139Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.328Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.594Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.859Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:49.941Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:49.942Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:50.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:50.021Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:50.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:50.393Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.257Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:51.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.231Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BTMY7SAS8WN12RDDDGEXD.tmp-for-creation: no space left on device"
level=warn ts=2022-10-13T09:46:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.621Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:52.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:52.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:53.241Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.252Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:56.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:57.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.677Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.721Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:46:57.723Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.001Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.372Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.634Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:58.812Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:46:59.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no
space left on device" level=error ts=2022-10-13T09:47:00.250Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:00.526Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:00.575Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:01.040Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:01.049Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:01.641Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 
target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:01.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:01.950Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.146Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.208Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.703Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:03.108Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:03.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:03.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:03.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:03.998Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:04.015Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:04.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:04.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:04.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:05.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:05.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:05.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:05.979Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:06.257Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:06.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:06.901Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:06.978Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:07.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:07.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:08.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:08.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:08.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:08.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 
target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:08.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:08.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.160Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.423Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.719Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.760Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.802Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:09.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:10.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:10.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:10.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:11.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=error ts=2022-10-13T09:47:11.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:12.238Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:12.269Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:12.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.148Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.342Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.429Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.647Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:13.856Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:13.953Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:13.976Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:13.995Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.015Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.039Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.047Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.055Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.059Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.062Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.102Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.135Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.242Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:14.278Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.346Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:14.454Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:14.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:14.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:14.829Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:15.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 
target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:15.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:16.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:16.788Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:16.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:16.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:17.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 
target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:17.168Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:17.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:17.567Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.020Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.213Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:47:18.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:19.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:19.140Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:19.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:19.210Z caller=scrape.go:1190 component="scrape manager" 
(Prometheus server log, 2022-10-13 09:47:19Z through 09:47:46Z, reconstructed one entry per line; roughly 200 near-identical entries are elided, as noted below.)

scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:19.469Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:47:24.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"

[~200 further entries between 09:47:19Z and 09:47:46Z elided. The same two messages repeat continuously: "Scrape commit failed" (caller=scrape.go:1190) across dozens of scrape pools — kubelet, node-exporter, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, openshift-apiserver, oauth-openshift, dns-default, router-default, multus, kuryr-cni, machine-config-daemon, machine-api, image-registry, the Cinder CSI driver, alertmanager, prometheus-k8s, thanos-querier/thanos-sidecar, kube-state-metrics, telemeter-client, OLM, and the various cluster operators — and "Rule sample appending failed" (caller=manager.go:625) for rule groups including kube-apiserver.rules, openshift-kubernetes.rules, k8s.rules, node-exporter.rules, kube-scheduler.rules, openshift-monitoring.rules, openshift-ingress.rules, openshift-etcd-telemetry.rules, kubelet.rules, node.rules, telemeter.rules, kube-prometheus-node-recording.rules, multus-admission-controller-monitor-service.rules, openshift-sre.rules, and cluster-version. Every entry carries the same underlying error, err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" (some report "log series" instead of "log samples"): the Prometheus WAL volume has run out of disk space.]

level=warn ts=2022-10-13T09:47:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:47:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:46.537Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:46.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:46.890Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:46.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:47.030Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:47.180Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:47.263Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:47.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:47.712Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.305Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.465Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.605Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:48.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.314Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.595Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.831Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.853Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:49.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:50.002Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:50.012Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:47:50.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:50.420Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:51.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:51.677Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:52.233Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BWFH81FERS7X0VQC5NPK2.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:47:52.564Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:47:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:52.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:52.818Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:54.475Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:54.496Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics 
msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:54.552Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:54.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:54.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:54.716Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:54.917Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:47:56.227Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:56.261Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:56.332Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:56.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:56.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.051Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:57.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:47:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.668Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.671Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.686Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.698Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:47:57.749Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:58.071Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:58.210Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:58.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:58.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:59.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:59.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:47:59.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:00.253Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 
target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:00.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:00.578Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:01.039Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:01.137Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:01.368Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:01.896Z 
level=error ts=2022-10-13T09:48:01.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:48:01.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... the same "Scrape commit failed" / err="write to WAL: ... write /prometheus/wal/00000039: no space left on device" entry repeats from ts=2022-10-13T09:48:01.962Z through at least ts=2022-10-13T09:48:27.435Z for essentially every scrape pool in the cluster, including serviceMonitors in openshift-apiserver, openshift-authentication, openshift-cluster-csi-drivers, openshift-controller-manager, openshift-dns, openshift-etcd-operator, openshift-ingress, openshift-kube-apiserver-operator, openshift-kube-controller-manager, openshift-kube-scheduler, openshift-kuryr, openshift-machine-api, openshift-machine-config-operator, openshift-marketplace, openshift-monitoring (kubelet, node-exporter, etcd, alertmanager, thanos-querier, thanos-sidecar, prometheus-k8s, prometheus-adapter, prometheus-operator, kube-state-metrics, openshift-state-metrics, telemeter-client, grafana, cluster-monitoring-operator), openshift-multus, openshift-operator-lifecycle-manager, and other operator namespaces. Most entries fail on "log samples"; a minority fail on "log series". ...]
level=warn ts=2022-10-13T09:48:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
[... the matching "Rule sample appending failed" warning repeats over the same interval for the rule groups node-exporter.rules, kubelet.rules, kube-scheduler.rules, kube-apiserver.rules, openshift-kubernetes.rules, openshift-ingress.rules, openshift-sre.rules, kube-prometheus-general.rules, kube-prometheus-node-recording.rules, kubernetes-system-kubelet, kubernetes-system-apiserver, kubernetes-storage, kubernetes-recurring.rules, cluster-version, cluster-network-operator-kuryr.rules, multus-admission-controller-monitor-service.rules, apiserver-requests-in-flight, general.rules, node.rules, and prometheus. Every entry points at the same full WAL segment, /prometheus/wal/00000039, and the log continues in this pattern. ...]
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.615Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.708Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.722Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.758Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:27.759Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:28.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:28.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:28.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:28.640Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:28.676Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:28.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:28.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:29.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:48:29.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:29.922Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:29.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:30.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:30.506Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:30.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:31.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:31.065Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:48:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:31.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:31.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:31.930Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:31.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.141Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:48:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.772Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 
target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:32.962Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.132Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.239Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:33.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:34.016Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:34.195Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:34.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:34.437Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:34.750Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:34.848Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:35.468Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:35.612Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:35.819Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:35.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:35.984Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:36.242Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:36.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:36.334Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:36.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:36.898Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:36.977Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:37.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:38.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:38.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:38.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:38.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:38.807Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:38.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:39.163Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:39.422Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:39.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:39.725Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:39.766Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:39.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:40.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:40.128Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:40.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:40.598Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:40.723Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:40.957Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:40.980Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:40.980Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:41.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:41.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:42.102Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:42.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:42.292Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:42.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:42.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.121Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.220Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.237Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.360Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.431Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.785Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:43.874Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:43.946Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:43.973Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:43.998Z caller=manager.go:625 
component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.029Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.056Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.060Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.110Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.257Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:44.279Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.364Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:44.473Z caller=manager.go:625 component="rule manager" 
group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:44.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:44.673Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:44.685Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:44.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:44.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:45.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:45.322Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:45.401Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:46.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:46.460Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:46.783Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:47.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:47.188Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:47.275Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:47.604Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.026Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.304Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.454Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.879Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:48.988Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.234Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.308Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.474Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.504Z 
caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.864Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:49.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:49.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:50.018Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:50.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:50.201Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:50.455Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:51.271Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:51.690Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:52.233Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8BYA49YYSGNH6VC3PY8YKQ.tmp-for-creation: no space left on device" level=warn ts=2022-10-13T09:48:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.572Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:52.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.580Z caller=manager.go:625 component="rule manager" 
group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.588Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.589Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:48:52.590Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.591Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.602Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.625Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:52.626Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:52.816Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:53.239Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:54.471Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:54.486Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:54.513Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:54.541Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:54.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:54.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:54.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:56.215Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:56.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:56.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:57.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.618Z caller=manager.go:625 
component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.665Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.676Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.711Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:48:57.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:58.008Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:58.205Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:58.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:58.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:58.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:58.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:59.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:59.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:59.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:59.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:48:59.934Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:00.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:00.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:00.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:01.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:01.060Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:01.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:01.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:01.893Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:01.947Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:01.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:01.976Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.786Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.921Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:02.968Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:03.385Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:03.487Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:03.489Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:03.884Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:03.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:03.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:04.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:04.198Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:04.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:04.439Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:04.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:04.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:04.861Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:05.459Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:05.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:05.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:05.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:05.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:06.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:06.323Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:06.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:07.643Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:07.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:08.210Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:08.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:08.623Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:08.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:08.806Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:08.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.169Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.764Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:09.999Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:10.126Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:10.403Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:10.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:10.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:10.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:11.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:11.298Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:12.142Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:12.283Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:12.287Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:12.658Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:12.709Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.046Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.352Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.654Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:13.681Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:13.855Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:13.952Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:13.986Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.038Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.066Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.081Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.101Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.109Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.112Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.153Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.186Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:14.293Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.419Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:14.541Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:14.614Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:14.686Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:14.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:14.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:14.873Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:15.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:15.332Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:15.396Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:16.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:16.115Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:16.791Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:16.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:16.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:17.027Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:17.187Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:17.379Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:17.387Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:17.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:17.715Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.023Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.315Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.467Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.488Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 
target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.500Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.687Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.752Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:18.923Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.041Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no 
space left on device" level=error ts=2022-10-13T09:49:19.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.508Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.509Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.510Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.511Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:49:19.927Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:19.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:20.119Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:20.130Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:20.204Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:20.552Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:21.207Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:21.688Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:22.544Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:49:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.581Z caller=manager.go:625 
component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.594Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.615Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:22.616Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:22.831Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:23.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:23.239Z 
caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:24.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:24.480Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:24.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:24.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:24.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:24.718Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:24.915Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:26.211Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:26.259Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:26.312Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:26.491Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:26.896Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.115Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:49:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.662Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.679Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.689Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.700Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:27.709Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:27.712Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:27.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.000Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:28.562Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device"

From 09:49:27 to 09:49:54 these same two failures ("write to WAL: log samples" and "write to WAL: log series", both on /prometheus/wal/00000039 with "no space left on device") repeat continuously. "Scrape commit failed" (scrape manager, scrape.go:1190) is logged for serviceMonitor targets in openshift-kube-apiserver, openshift-apiserver, openshift-apiserver-operator, openshift-kube-apiserver-operator, openshift-kube-controller-manager, openshift-kube-controller-manager-operator, openshift-kube-scheduler, openshift-kube-scheduler-operator, openshift-etcd-operator, openshift-multus, openshift-dns, openshift-kuryr, openshift-network-diagnostics, openshift-ingress, openshift-ingress-operator, openshift-monitoring (kubelet, node-exporter, etcd, kube-state-metrics, prometheus-k8s, prometheus-adapter, prometheus-operator, thanos-querier, thanos-sidecar, alertmanager, telemeter-client, grafana, cluster-monitoring-operator), openshift-machine-config-operator, openshift-machine-api, openshift-cluster-version, openshift-cluster-csi-drivers, openshift-cluster-machine-approver, openshift-cluster-storage-operator, openshift-cluster-samples-operator, openshift-cluster-node-tuning-operator, openshift-cloud-credential-operator, openshift-config-operator, openshift-controller-manager, openshift-controller-manager-operator, openshift-authentication, openshift-authentication-operator, openshift-console-operator, openshift-image-registry, openshift-marketplace, openshift-operator-lifecycle-manager, and openshift-service-ca-operator. "Rule sample appending failed" (rule manager, manager.go:625) is logged repeatedly for the rule groups k8s.rules, telemeter.rules, openshift-etcd-telemetry.rules, node-exporter.rules, kubelet.rules, kubernetes-system-kubelet, openshift-sre.rules, kube-scheduler.rules, apiserver-requests-in-flight, kube-apiserver.rules, kubernetes-recurring.rules, cluster-version, general.rules, kube-prometheus-general.rules, openshift-ingress.rules, cluster-network-operator-kuryr.rules, kubernetes-storage, kubernetes-system-apiserver, and openshift-kubernetes.rules. TSDB compaction then fails for the same reason:

level=error ts=2022-10-13T09:49:52.234Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C04QAHA645TNSMTSQ7Y8C.tmp-for-creation: no space left on device"
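Every line in this flood is one of the two WAL-append failures summarized above, and the compaction error shows the TSDB cannot even persist the in-memory head block to reclaim space, so the volume backing /prometheus is simply full. With output this repetitive, a frequency count by (level, component, err) is usually enough to triage the dump. Below is a minimal sketch, assuming the captured pod log is piped in on stdin; the script name triage.py and the oc logs invocation in the comment are illustrative, not part of this run's tooling:

```python
import re
import sys
from collections import Counter

# Minimal logfmt extractor: matches key=value and key="quoted value" pairs
# as they appear in the Prometheus log lines above.
FIELD_RE = re.compile(r'(\w+)=("([^"]*)"|\S+)')

def parse_entry(line: str) -> dict:
    """Parse one logfmt line into a dict, stripping quotes from values."""
    return {
        m.group(1): m.group(3) if m.group(3) is not None else m.group(2)
        for m in FIELD_RE.finditer(line)
    }

def summarize(lines) -> Counter:
    """Count entries carrying an err= field, keyed by (level, component, err)."""
    counts = Counter()
    for line in lines:
        entry = parse_entry(line)
        if "err" in entry:
            counts[(entry.get("level"), entry.get("component"), entry["err"])] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical usage against a live cluster:
    #   oc logs -n openshift-monitoring prometheus-k8s-0 -c prometheus | python3 triage.py
    for (level, component, err), n in summarize(sys.stdin).most_common(10):
        print(f"{n:6d}  level={level}  component={component}  err={err}")
```

On a dump like this one, the top buckets collapse to the two WAL-write errors plus the single compaction failure, which points at the storage capacity or retention settings of the prometheus-k8s pods rather than at any individual scrape target.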
level=error ts=2022-10-13T09:49:52.574Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:52.865Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=warn ts=2022-10-13T09:49:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:53.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device"
level=error ts=2022-10-13T09:49:54.479Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:54.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:54.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:54.542Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:54.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:54.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:56.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:56.216Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:56.309Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:56.490Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:56.893Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:57.430Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.619Z caller=manager.go:625 
component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.657Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.659Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.667Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.707Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:49:57.748Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:58.017Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:58.205Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:58.579Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:58.633Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:58.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:58.883Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:59.042Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:59.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:59.843Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:59.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:49:59.971Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:00.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape 
commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:00.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:00.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:01.047Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:01.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:01.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:01.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:01.890Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:01.943Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:01.960Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:01.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.176Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.557Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.710Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.781Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.929Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:02.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:03.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:03.243Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:03.384Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:03.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:03.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:03.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:03.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:04.021Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:04.256Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:04.301Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:04.432Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:04.695Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:04.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:05.516Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:05.612Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:05.826Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:05.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:06.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:06.335Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:06.702Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:06.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:07.064Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:07.629Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:08.230Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:08.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:08.559Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:08.630Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:08.642Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/0 target=https://10.128.22.89:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:08.675Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:08.799Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:08.842Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.115Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.769Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.796Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:09.998Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:10.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:10.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:10.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:10.726Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:10.956Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:10.983Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:11.286Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 
target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:11.294Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:12.274Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:12.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:12.365Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:12.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:12.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.077Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.155Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.349Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.648Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:13.866Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:13.956Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:13.981Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.027Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.050Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.063Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.074Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.077Z 
caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.080Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.149Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.262Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:14.281Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.359Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:14.452Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:14.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:14.694Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:14.814Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:14.844Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:15.258Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:15.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:15.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:16.117Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:16.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:16.880Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:16.900Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:16.993Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:50:17.044Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:17.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:17.264Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:17.381Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:17.584Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:17.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.025Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 
target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.601Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.753Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:18.920Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.462Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.533Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.592Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.727Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.890Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.898Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:19.936Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:19.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:20.215Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:20.341Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:21.272Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:21.682Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on 
device" level=warn ts=2022-10-13T09:50:22.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.582Z 
caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.592Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.612Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:22.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:22.638Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:22.836Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:50:23.061Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:23.692Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:24.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:24.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:24.547Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:24.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:24.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:24.722Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:24.915Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:26.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:26.240Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:26.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:26.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:26.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:26.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:27.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.621Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.674Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.685Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.696Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.705Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.735Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.736Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:27.737Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:28.154Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:28.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:28.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:28.636Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:28.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:28.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:28.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:29.034Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:29.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:50:29.857Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:29.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:29.945Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:30.251Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:30.505Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:30.576Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:30.586Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:31.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:31.488Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:31.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:31.621Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:31.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:31.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:31.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:31.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:32.131Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:32.225Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:32.390Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:32.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:32.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:32.777Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:32.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 
target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:32.961Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.013Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.193Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:33.970Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:34.014Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:34.199Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:34.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:34.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:34.446Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:34.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:34.699Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:34.846Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:35.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:35.612Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:35.841Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:35.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:35.969Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:36.201Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:36.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:36.894Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:36.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:37.653Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:38.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:38.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:38.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:38.561Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:38.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:38.863Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.118Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.426Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log 
samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.724Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.759Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.794Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:39.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:40.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:40.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:40.596Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:40.721Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:40.974Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: 
write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:40.982Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:40.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:41.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:41.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:42.080Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:42.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:42.652Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:42.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:43.123Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:43.125Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:43.151Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:43.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:43.438Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:43.646Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:43.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:43.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:43.988Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.006Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.014Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.100Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:44.122Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.164Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:44.288Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.311Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.493Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:44.597Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:44.681Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 
target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:44.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:44.821Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:44.849Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:45.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:45.319Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:45.395Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:46.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:46.116Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:46.538Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:46.783Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:46.902Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:46.912Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:47.031Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:47.175Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:47.212Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:47.374Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:47.573Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.024Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:48.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.306Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.340Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.450Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.463Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.609Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.872Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:48.911Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.033Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.214Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.507Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.588Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.637Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.810Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.821Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.852Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:49.937Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:49.940Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:50.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:50.312Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:51.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit 
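Every record in the excerpt shares one root cause: the volume backing the Prometheus WAL is full, so both scrape commits and rule evaluations fail at write time. With thousands of near-identical records, the quickest triage step is to tally failures per scrape pool and per rule group and check the time window, confirming the outage is indiscriminate rather than tied to one noisy target. The sketch below is one way to do that; it is a helper, not part of the test suite, and the log path is a hypothetical placeholder for wherever the pod log was saved.

```python
#!/usr/bin/env python3
"""Rough triage helper for a Prometheus log dump like the one above."""
import re
from collections import Counter

LOG_PATH = "prometheus-k8s-0.log"  # hypothetical path to the saved pod log

# logfmt values are either bare tokens or double-quoted strings.
PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

scrape_failures = Counter()
rule_failures = Counter()
timestamps = []

with open(LOG_PATH) as fh:
    text = fh.read()

# Records were hard-wrapped in the report; split on "level=" instead of
# newlines so each record is reassembled before parsing.
for chunk in re.split(r"(?=level=)", text):
    fields = {k: v.strip('"') for k, v in PAIR.findall(chunk)}
    if not fields.get("ts"):
        continue  # leading fragment or non-record text
    timestamps.append(fields["ts"])
    if fields.get("msg") == "Scrape commit failed":
        scrape_failures[fields.get("scrape_pool", "?")] += 1
    elif fields.get("msg") == "Rule sample appending failed":
        rule_failures[fields.get("group", "?")] += 1

if timestamps:
    print(f"window: {min(timestamps)} .. {max(timestamps)}")
print(f"{sum(scrape_failures.values())} scrape commit failures across "
      f"{len(scrape_failures)} scrape pools")
for pool, n in scrape_failures.most_common(5):
    print(f"  {n:4d}  {pool}")
print(f"{sum(rule_failures.values())} rule sample appends failed across "
      f"{len(rule_failures)} groups")
for group, n in rule_failures.most_common(5):
    print(f"  {n:4d}  {group}")
```

Run against this dump, it should report a single tight window (roughly 09:50:32Z to 09:51:00Z) with every pool and group failing identically, which is the signature of a full data volume rather than of any individual misbehaving target.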
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:51.678Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.013Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:52.235Z caller=db.go:826 component=tsdb msg="compaction failed" err="compact head: persist head block: mkdir /prometheus/01GF8C1ZABCJH806HDW2HDBA8G.tmp-for-creation: no space left on device" level=error ts=2022-10-13T09:50:52.556Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.565Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.566Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.567Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.569Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to 
WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule 
manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.578Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.579Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.584Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.593Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:50:52.595Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.613Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:52.614Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:52.792Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-node-tuning-operator/node-tuning-operator/0 target=https://10.128.33.187:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:52.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:53.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:53.691Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:54.464Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:54.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:54.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: 
log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:54.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:54.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:54.545Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:54.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:54.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:54.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-controller/0 target=http://10.196.3.178:9654/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:56.004Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:56.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:56.326Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 
target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:56.499Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:56.889Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:57.436Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.658Z caller=manager.go:625 component="rule manager" 
group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.660Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.663Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.673Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.683Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.702Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.732Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.733Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:50:57.734Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:58.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:58.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:58.373Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:58.622Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" 
err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:58.651Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:58.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:58.823Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:59.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:59.531Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:59.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:59.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:59.939Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:50:59.952Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:00.177Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:00.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:51:00.503Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:00.577Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:00.587Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:01.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:01.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:01.135Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:01.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:01.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:01.608Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:01.888Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:01.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:01.959Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:01.964Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:02.138Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:02.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:02.391Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.547Z caller=manager.go:625 component="rule 
manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:02.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:02.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:02.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:02.774Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:02.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:02.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:03.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:03.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:03.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 
target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:03.386Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:03.481Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:03.881Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:03.909Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:03.913Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:04.018Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:04.074Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:04.194Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:04.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:04.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:04.300Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:04.300Z 
caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:04.434Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:04.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:04.705Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:04.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:05.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:05.613Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:05.837Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:05.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:05.997Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:06.254Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:06.325Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:06.701Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:06.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:06.906Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:07.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:07.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:08.203Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:08.494Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:08.536Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:08.558Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:08.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:08.803Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 
target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:08.914Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.424Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.722Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.768Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.798Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:09.996Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:10.126Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape 
commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:10.411Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:10.593Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:10.722Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:10.955Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:10.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:10.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:10.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:10.984Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:10.984Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:11.289Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:11.300Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" 
level=error ts=2022-10-13T09:51:12.295Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:12.393Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:12.591Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:12.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:12.698Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.097Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.125Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.219Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.346Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.427Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.648Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:13.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.793Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:13.877Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:13.949Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:13.972Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:13.996Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.032Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.042Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.052Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.065Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.071Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.076Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:51:14.124Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.158Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:14.285Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.309Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.440Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:14.586Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:14.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:14.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:14.731Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:14.847Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:14.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:15.273Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:15.322Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:15.398Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:16.039Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:16.086Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:16.123Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:16.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:16.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:16.808Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:16.892Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:16.916Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:16.995Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:17.038Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:17.213Z caller=scrape.go:1190 component="scrape manager" 
scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:17.291Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:17.375Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:17.583Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:17.713Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.212Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:18.261Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.307Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.318Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.451Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.476Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit 
failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.492Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.675Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.878Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:18.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.035Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.147Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.209Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.308Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/catalog-operator/0 target=https://10.128.93.117:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.453Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-machine-approver/cluster-machine-approver/0 target=https://10.196.0.105:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.503Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.504Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.505Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.506Z caller=manager.go:625 component="rule manager" group=openshift-ingress.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.529Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.620Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-storage-operator/cluster-storage-operator/0 target=https://10.128.52.71:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.666Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.833Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.842Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.851Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.169:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:19.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.148:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:19.941Z caller=manager.go:625 component="rule manager" group=cluster-network-operator-kuryr.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:20.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.187:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:20.217Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:21.246Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:21.689Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/cluster-monitoring-operator/0 target=https://10.128.23.49:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.014Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.015Z caller=manager.go:625 component="rule manager" group=kubernetes-storage msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:22.548Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-service-ca-operator/service-ca-operator/0 target=https://10.128.56.252:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.568Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.569Z caller=manager.go:625 component="rule 
manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.570Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.571Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.572Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.573Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.574Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.575Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.576Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.577Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.580Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.581Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.582Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.583Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.585Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample 
appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.586Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.587Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.597Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.600Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.622Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.623Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:22.624Z caller=manager.go:625 component="rule manager" group=openshift-kubernetes.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:22.827Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication-operator/authentication-operator/0 target=https://10.128.74.228:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:23.062Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/1 target=https://10.128.22.45:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:23.240Z caller=manager.go:625 component="rule manager" group=kubernetes-system-apiserver 
msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:23.693Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.72:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:24.470Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.0.105:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:24.508Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.169:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:24.510Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:24.511Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:24.512Z caller=manager.go:625 component="rule manager" group=kube-prometheus-node-recording.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:24.535Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry/0 target=https://10.128.83.90:5000/extensions/v2/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:24.540Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.0.105:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:24.603Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:24.714Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:24.919Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 
target=https://10.196.3.187:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:26.005Z caller=manager.go:625 component="rule manager" group=multus-admission-controller-monitor-service.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:26.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-insights/insights-operator/0 target=https://10.128.29.145:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:26.236Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.232:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:26.310Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns-operator/dns-operator/0 target=https://10.128.37.87:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:26.498Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.199:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:26.905Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.049Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.050Z caller=manager.go:625 component="rule manager" group=node.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.114Z caller=manager.go:625 component="rule manager" group=prometheus msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:27.435Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.3.178:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.616Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.616Z 
caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.617Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.618Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.619Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.620Z caller=manager.go:625 component="rule manager" group=openshift-monitoring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.656Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.658Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.661Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.664Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.666Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.680Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.693Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.706Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.718Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:27.721Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/openshift-state-metrics/1 target=https://10.128.22.89:9443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:51:27.746Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:27.747Z caller=manager.go:625 component="rule manager" group=k8s.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:27.993Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.0.105:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:28.223Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:28.377Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.52:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:28.635Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.199:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:28.645Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.187:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:28.681Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:28.780Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.178:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:29.045Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:29.534Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:29.843Z caller=scrape.go:1190 component="scrape 
manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:29.910Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.120.187:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:29.931Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.0.105:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:30.248Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/2 target=https://10.128.44.154:8444/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:30.497Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver/0 target=https://10.128.121.9:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:30.580Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.0.105:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:30.588Z caller=manager.go:625 component="rule manager" group=telemeter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:31.043Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.18:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:31.136Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.2.72:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:31.221Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver/kube-apiserver/0 target=https://10.196.3.187:6443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:31.487Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:31.488Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:31.488Z caller=manager.go:625 component="rule manager" 
group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:31.489Z caller=manager.go:625 component="rule manager" group=openshift-etcd-telemetry.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:31.615Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-version/cluster-version-operator/0 target=https://10.196.3.187:9099/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:31.897Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.23:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:31.926Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.187:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:31.965Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.2.169:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:31.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.108:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.157Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager-operator/kube-controller-manager-operator/0 target=https://10.128.25.14:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.222Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/0 target=https://10.128.44.154:8441/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.407Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.3.178:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.545Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.545Z caller=manager.go:625 
component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.546Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.547Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.548Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.549Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:32.550Z caller=manager.go:625 component="rule manager" group=node-exporter.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.565Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.169:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.683Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-apiserver-operator/kube-apiserver-operator/0 target=https://10.128.87.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.775Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.19:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.0.105:10257/metrics msg="Scrape commit failed" err="write to 
WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-controllers/1 target=https://10.128.44.154:8442/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:32.966Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.127.168:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.241Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.199:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.244Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.0.105:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.178:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.509Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cloud-credential-operator/cloud-credential-operator/0 target=https://10.128.62.5:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.879Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.111.48:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.908Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/cluster-autoscaler-operator/0 target=https://10.128.45.39:9192/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.935Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:33.973Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-image-registry/image-registry-operator/0 target=https://10.128.83.151:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space 
left on device" level=error ts=2022-10-13T09:51:34.022Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.114:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:34.200Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.199:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:34.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.187:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:34.298Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:34.299Z caller=manager.go:625 component="rule manager" group=kubelet.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:34.448Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.3.178:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:34.586Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:34.696Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.3.178:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:34.859Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.3.178:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:35.466Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.187:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:35.617Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.3.178:9205/metrics 
msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:35.839Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-config-operator/config-operator/0 target=https://10.128.73.213:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:35.924Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.178:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:35.985Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.55:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:36.217Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.105:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:36.258Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://10.128.22.45:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:36.323Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.3.187:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:36.700Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.199:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:36.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:36.938Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.105:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:37.627Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler-operator/kube-scheduler-operator/0 target=https://10.128.12.37:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:37.867Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.3.187:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error 
ts=2022-10-13T09:51:38.202Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:38.245Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.178:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:38.530Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/2 target=https://10.196.3.178:9204/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:38.566Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-api/machine-api-operator/0 target=https://10.128.44.42:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:38.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:38.800Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress/router-default/0 target=https://10.196.0.199:1936/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:38.838Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.139:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:39.036Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:39.112Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.161:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:39.183Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/1 target=https://10.196.0.105:9203/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:39.429Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.73:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:39.529Z caller=scrape.go:1190 
component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.157:8443/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:39.717Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.72:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:39.755Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.120.232:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:39.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-network-diagnostics/network-check-source/0 target=https://10.128.103.204:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:40.003Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 target=https://10.128.23.35:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:40.111Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.0.199:10250/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:40.399Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.3.178:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:40.585Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-k8s/0 target=https://10.128.23.35:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:40.725Z caller=manager.go:625 component="rule manager" group=kubernetes-system-kubelet msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:40.958Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.23.138:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:40.981Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:40.981Z caller=manager.go:625 component="rule manager" group=openshift-sre.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:40.981Z caller=manager.go:625 
component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:40.982Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:40.983Z caller=manager.go:625 component="rule manager" group=kube-scheduler.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:41.268Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-console-operator/console-operator/0 target=https://10.128.133.246:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:41.317Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-dns/dns-default/0 target=https://10.128.126.114:9154/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:42.162Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.3.187:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:42.267Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver-operator/openshift-apiserver-operator/0 target=https://10.128.97.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:42.355Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 target=https://10.196.2.72:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:42.657Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-multus-admission-controller/0 target=https://10.128.34.59:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:42.706Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.2.72:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.066Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/1 
target=https://10.196.2.169:10250/metrics/cadvisor msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.120Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.82:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.161Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.190:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.218Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/3 target=http://10.196.0.105:9537/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.359Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-ingress-operator/ingress-operator/0 target=https://10.128.59.173:9393/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.428Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/alertmanager/0 target=https://10.128.22.112:9095/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.659Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.187:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:43.680Z caller=manager.go:625 component="rule manager" group=apiserver-requests-in-flight msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.787Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.178:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:43.850Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/0 target=https://10.196.2.72:10250/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:43.945Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:43.969Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:43.989Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" 
err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.011Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.022Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.030Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.037Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.044Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.048Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.051Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.088Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:44.110Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-samples-operator/cluster-samples-operator/0 target=https://10.128.27.226:60000/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.120Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.219Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:44.284Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-machine-config-operator/machine-config-daemon/0 target=https://10.196.0.105:9001/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.329Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:44.454Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write 
/prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:44.587Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.247:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:44.671Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-etcd-operator/etcd-operator/0 target=https://10.128.40.74:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:44.684Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.169:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:44.813Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.0.105:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:44.845Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.0.199:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:45.260Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/0 target=https://10.196.0.105:9202/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:45.329Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/telemeter-client/0 target=https://10.128.22.239:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:45.397Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-operator-lifecycle-manager/olm-operator/0 target=https://10.128.92.123:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:46.038Z caller=manager.go:625 component="rule manager" group=kubernetes-recurring.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:46.085Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:46.122Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:46.458Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn 
ts=2022-10-13T09:51:46.459Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:46.536Z caller=manager.go:625 component="rule manager" group=general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:46.790Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/0 target=https://10.196.3.178:10259/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:46.891Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.135:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:46.907Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-adapter/0 target=https://10.128.23.77:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:46.994Z caller=manager.go:625 component="rule manager" group=kube-prometheus-general.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:47.037Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/prometheus-operator/0 target=https://10.128.22.177:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:47.196Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-apiserver/openshift-apiserver-operator-check-endpoints/0 target=https://10.128.121.9:17698/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:47.314Z caller=manager.go:625 component="rule manager" group=kube-apiserver.rules msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:47.385Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-cluster-csi-drivers/openstack-cinder-csi-driver-controller-monitor/3 target=https://10.196.0.105:9205/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:47.569Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.178:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:47.711Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kubelet/2 target=https://10.196.2.72:10250/metrics/probes msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.022Z 
caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.196.3.187:10257/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.206Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.92:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=warn ts=2022-10-13T09:51:48.260Z caller=manager.go:625 component="rule manager" group=cluster-version msg="Rule sample appending failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.311Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager/openshift-controller-manager/0 target=https://10.128.110.159:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.316Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-querier/0 target=https://10.128.23.183:9091/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.449Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-marketplace/marketplace-operator/0 target=https://10.128.79.141:8081/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.460Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-controller-manager-operator/openshift-controller-manager-operator/0 target=https://10.128.48.110:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.485Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kube-scheduler/kube-scheduler/1 target=https://10.196.0.105:10259/metrics/resources msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.611Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/etcd/0 target=https://10.196.3.187:9979/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.674Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.35.46:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.754Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-kuryr/monitor-kuryr-cni/0 target=http://10.196.2.169:9655/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.887Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/thanos-sidecar/0 
target=https://10.128.23.18:10902/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.911Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.2.169:9100/metrics msg="Scrape commit failed" err="write to WAL: log series: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:48.933Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/node-exporter/0 target=https://10.196.3.178:9100/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:49.032Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-multus/monitor-network/0 target=https://10.128.34.62:8443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:49.121Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-authentication/oauth-openshift/0 target=https://10.128.116.141:6443/metrics msg="Scrape commit failed" err="write to WAL: log samples: write /prometheus/wal/00000039: no space left on device" level=error ts=2022-10-13T09:51:49.216Z caller=scrape.go:1190 component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/grafana/0 target=https://10.128.22.230:3000/metrics msg="Scrape commit failed" err="